Message ID | 20200813153254.93731-3-sgarzare@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | io_uring: add restrictions to support untrusted applications and guests |
Hi Stefano,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.8 next-20200813]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Stefano-Garzarella/io_uring-add-restrictions-to-support-untrusted-applications-and-guests/20200813-233653
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git dc06fe51d26efc100ac74121607c01a454867c91
config: s390-randconfig-c003-20200813 (attached as .config)
compiler: s390-linux-gcc (GCC) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

coccinelle warnings: (new ones prefixed by >>)

>> fs/io_uring.c:8516:7-14: WARNING opportunity for memdup_user

vim +8516 fs/io_uring.c

  8497
  8498  static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
  8499                                      unsigned int nr_args)
  8500  {
  8501          struct io_uring_restriction *res;
  8502          size_t size;
  8503          int i, ret;
  8504
  8505          /* We allow only a single restrictions registration */
  8506          if (ctx->restricted)
  8507                  return -EBUSY;
  8508
  8509          if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
  8510                  return -EINVAL;
  8511
  8512          size = array_size(nr_args, sizeof(*res));
  8513          if (size == SIZE_MAX)
  8514                  return -EOVERFLOW;
  8515
> 8516          res = kmalloc(size, GFP_KERNEL);
  8517          if (!res)
  8518                  return -ENOMEM;
  8519
  8520          if (copy_from_user(res, arg, size)) {
  8521                  ret = -EFAULT;
  8522                  goto out;
  8523          }
  8524
  8525          for (i = 0; i < nr_args; i++) {
  8526                  switch (res[i].opcode) {
  8527                  case IORING_RESTRICTION_REGISTER_OP:
  8528                          if (res[i].register_op >= IORING_REGISTER_LAST) {
  8529                                  ret = -EINVAL;
  8530                                  goto out;
  8531                          }
  8532
  8533                          __set_bit(res[i].register_op,
  8534                                    ctx->restrictions.register_op);
  8535                          break;
  8536                  case IORING_RESTRICTION_SQE_OP:
  8537                          if (res[i].sqe_op >= IORING_OP_LAST) {
  8538                                  ret = -EINVAL;
  8539                                  goto out;
  8540                          }
  8541
  8542                          __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
  8543                          break;
  8544                  case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
  8545                          ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
  8546                          break;
  8547                  case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
  8548                          ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
  8549                          break;
  8550                  default:
  8551                          ret = -EINVAL;
  8552                          goto out;
  8553                  }
  8554          }
  8555
  8556          ctx->restricted = 1;
  8557
  8558          ret = 0;
  8559  out:
  8560          /* Reset all restrictions if an error happened */
  8561          if (ret != 0)
  8562                  memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
  8563
  8564          kfree(res);
  8565          return ret;
  8566  }
  8567

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
On Fri, Aug 14, 2020 at 01:42:15AM +0800, kernel test robot wrote:
> Hi Stefano,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on linus/master]
> [also build test WARNING on v5.8 next-20200813]
> [...]
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> coccinelle warnings: (new ones prefixed by >>)
>
> >> fs/io_uring.c:8516:7-14: WARNING opportunity for memdup_user

Yeah, I think it makes sense. I'll use memdup_user() in the next version.
On Thu, Aug 13, 2020 at 05:32:53PM +0200, Stefano Garzarella wrote:
> +/*
> + * io_uring_restriction->opcode values
> + */
> +enum {
> +	/* Allow an io_uring_register(2) opcode */
> +	IORING_RESTRICTION_REGISTER_OP,
> +
> +	/* Allow an sqe opcode */
> +	IORING_RESTRICTION_SQE_OP,
> +
> +	/* Allow sqe flags */
> +	IORING_RESTRICTION_SQE_FLAGS_ALLOWED,
> +
> +	/* Require sqe flags (these flags must be set on each submission) */
> +	IORING_RESTRICTION_SQE_FLAGS_REQUIRED,
> +
> +	IORING_RESTRICTION_LAST
> +};

Same thought on enum literals, but otherwise, looks good:

Reviewed-by: Kees Cook <keescook@chromium.org>
On Wed, Aug 26, 2020 at 12:46:24PM -0700, Kees Cook wrote:
> On Thu, Aug 13, 2020 at 05:32:53PM +0200, Stefano Garzarella wrote:
> > +/*
> > + * io_uring_restriction->opcode values
> > + */
> > +enum {
> > [...]
> > +	IORING_RESTRICTION_LAST
> > +};
>
> Same thought on enum literals, but otherwise, looks good:

Sure, I'll fix the enum in the next version.

> Reviewed-by: Kees Cook <keescook@chromium.org>

Thanks for the review,
Stefano
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1ec25ee71372..cb365e6e0af7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -98,6 +98,8 @@
 #define IORING_MAX_FILES_TABLE	(1U << IORING_FILE_TABLE_SHIFT)
 #define IORING_FILE_TABLE_MASK	(IORING_MAX_FILES_TABLE - 1)
 #define IORING_MAX_FIXED_FILES	(64 * IORING_MAX_FILES_TABLE)
+#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
+				 IORING_REGISTER_LAST + IORING_OP_LAST)
 
 struct io_uring {
 	u32 head ____cacheline_aligned_in_smp;
@@ -219,6 +221,13 @@ struct io_buffer {
 	__u16 bid;
 };
 
+struct io_restriction {
+	DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+	u8 sqe_flags_allowed;
+	u8 sqe_flags_required;
+};
+
 struct io_ring_ctx {
 	struct {
 		struct percpu_ref	refs;
@@ -231,6 +240,7 @@ struct io_ring_ctx {
 		unsigned int		cq_overflow_flushed: 1;
 		unsigned int		drain_next: 1;
 		unsigned int		eventfd_async: 1;
+		unsigned int		restricted: 1;
 
 		/*
 		 * Ring buffer of indices into array of io_uring_sqe, which is
@@ -338,6 +348,7 @@ struct io_ring_ctx {
 	struct llist_head		file_put_llist;
 
 	struct work_struct		exit_work;
+	struct io_restriction		restrictions;
 };
 
 /*
@@ -6353,6 +6364,19 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
 		return -EINVAL;
 
+	if (unlikely(ctx->restricted)) {
+		if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
+			return -EACCES;
+
+		if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
+		    ctx->restrictions.sqe_flags_required)
+			return -EACCES;
+
+		if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
+				  ctx->restrictions.sqe_flags_required))
+			return -EACCES;
+	}
+
 	if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
 	    !io_op_defs[req->opcode].buffer_select)
 		return -EOPNOTSUPP;
@@ -8650,6 +8674,76 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
 	return -EINVAL;
 }
 
+static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+				    unsigned int nr_args)
+{
+	struct io_uring_restriction *res;
+	size_t size;
+	int i, ret;
+
+	/* We allow only a single restrictions registration */
+	if (ctx->restricted)
+		return -EBUSY;
+
+	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
+		return -EINVAL;
+
+	size = array_size(nr_args, sizeof(*res));
+	if (size == SIZE_MAX)
+		return -EOVERFLOW;
+
+	res = kmalloc(size, GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	if (copy_from_user(res, arg, size)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	for (i = 0; i < nr_args; i++) {
+		switch (res[i].opcode) {
+		case IORING_RESTRICTION_REGISTER_OP:
+			if (res[i].register_op >= IORING_REGISTER_LAST) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			__set_bit(res[i].register_op,
+				  ctx->restrictions.register_op);
+			break;
+		case IORING_RESTRICTION_SQE_OP:
+			if (res[i].sqe_op >= IORING_OP_LAST) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+			break;
+		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
+			ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
+			break;
+		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
+			ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
+			break;
+		default:
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
+	ctx->restricted = 1;
+
+	ret = 0;
+out:
+	/* Reset all restrictions if an error happened */
+	if (ret != 0)
+		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
+
+	kfree(res);
+	return ret;
+}
+
 static bool io_register_op_must_quiesce(int op)
 {
 	switch (op) {
@@ -8696,6 +8790,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 		if (ret) {
 			percpu_ref_resurrect(&ctx->refs);
 			ret = -EINTR;
+			goto out_quiesce;
+		}
+	}
+
+	if (ctx->restricted) {
+		if (opcode >= IORING_REGISTER_LAST) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		if (!test_bit(opcode, ctx->restrictions.register_op)) {
+			ret = -EACCES;
 			goto out;
 		}
 	}
@@ -8759,15 +8865,19 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_unregister_personality(ctx, nr_args);
 		break;
+	case IORING_REGISTER_RESTRICTIONS:
+		ret = io_register_restrictions(ctx, arg, nr_args);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
 	}
 
+out:
 	if (io_register_op_must_quiesce(opcode)) {
 		/* bring the ctx back to life */
 		percpu_ref_reinit(&ctx->refs);
-out:
+out_quiesce:
 		reinit_completion(&ctx->ref_comp);
 	}
 	return ret;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index cdc98afbacc3..be54bc3cf173 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -267,6 +267,7 @@ enum {
 	IORING_REGISTER_PROBE,
 	IORING_REGISTER_PERSONALITY,
 	IORING_UNREGISTER_PERSONALITY,
+	IORING_REGISTER_RESTRICTIONS,
 
 	/* this goes last */
 	IORING_REGISTER_LAST
@@ -295,4 +296,34 @@ struct io_uring_probe {
 	struct io_uring_probe_op ops[0];
 };
 
+struct io_uring_restriction {
+	__u16 opcode;
+	union {
+		__u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
+		__u8 sqe_op;      /* IORING_RESTRICTION_SQE_OP */
+		__u8 sqe_flags;   /* IORING_RESTRICTION_SQE_FLAGS_* */
+	};
+	__u8 resv;
+	__u32 resv2[3];
+};
+
+/*
+ * io_uring_restriction->opcode values
+ */
+enum {
+	/* Allow an io_uring_register(2) opcode */
+	IORING_RESTRICTION_REGISTER_OP,
+
+	/* Allow an sqe opcode */
+	IORING_RESTRICTION_SQE_OP,
+
+	/* Allow sqe flags */
+	IORING_RESTRICTION_SQE_FLAGS_ALLOWED,
+
+	/* Require sqe flags (these flags must be set on each submission) */
+	IORING_RESTRICTION_SQE_FLAGS_REQUIRED,
+
+	IORING_RESTRICTION_LAST
+};
+
 #endif
The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode permanently installs a feature allowlist on an io_ring_ctx. The io_ring_ctx can then be passed to untrusted code with the knowledge that only operations present in the allowlist can be executed.

The allowlist approach ensures that new features added to io_uring do not accidentally become available when an existing application is launched on a newer kernel version.

Currently it is possible to restrict sqe opcodes, sqe flags, and register opcodes.

The IORING_REGISTER_RESTRICTIONS registration can only be made once. Afterwards it is not possible to change restrictions anymore. This prevents untrusted code from removing restrictions.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
v3:
- added IORING_RESTRICTION_SQE_FLAGS_ALLOWED and
  IORING_RESTRICTION_SQE_FLAGS_REQUIRED
- removed IORING_RESTRICTION_FIXED_FILES_ONLY

RFC v2:
- added 'restricted' flag in the ctx [Jens]
- added IORING_MAX_RESTRICTIONS define
- returned EBUSY instead of EINVAL when restrictions are already registered
- reset restrictions if an error happened during the registration
---
 fs/io_uring.c                 | 112 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/io_uring.h |  31 ++++++++++
 2 files changed, 142 insertions(+), 1 deletion(-)