From patchwork Fri Sep 15 20:03:53 2023
X-Patchwork-Submitter: Maxim Samoylov
X-Patchwork-Id: 13387595
From: Maxim Samoylov
Cc: Jason Gunthorpe, Leon Romanovsky, Dennis Dalessandro, Christian Benvenuti, Bernard Metzler, Vadim Fedorenko, Maxim Samoylov
Subject: [PATCH] IB: fix memlock limit handling code
Date: Fri, 15 Sep 2023 13:03:53 -0700
Message-ID: <20230915200353.1238097-1-max7255@meta.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This patch makes the handling of the RLIM_INFINITY value uniform across
the InfiniBand/RDMA subsystem. Currently the infinity constant is
treated as an actual limit value, which can trigger unexpected ENOMEM
errors in corner-case configurations.

Also provide a single helper to check against the process MEMLOCK limit
while registering user memory region mappings.
Signed-off-by: Maxim Samoylov
---
 drivers/infiniband/core/umem.c             |  7 ++-----
 drivers/infiniband/hw/qib/qib_user_pages.c |  7 +++----
 drivers/infiniband/hw/usnic/usnic_uiom.c   |  6 ++----
 drivers/infiniband/sw/siw/siw_mem.c        |  6 +++---
 drivers/infiniband/sw/siw/siw_verbs.c      | 23 ++++++++++------------
 include/rdma/ib_umem.h                     | 11 +++++++++++
 6 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index f9ab671c8eda..3b197bdc21bf 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -35,12 +35,12 @@
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include

 #include "uverbs.h"
@@ -150,7 +150,6 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 {
 	struct ib_umem *umem;
 	struct page **page_list;
-	unsigned long lock_limit;
 	unsigned long new_pinned;
 	unsigned long cur_base;
 	unsigned long dma_attr = 0;
@@ -200,10 +199,8 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 		goto out;
 	}

-	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
 	new_pinned = atomic64_add_return(npages, &mm->pinned_vm);
-	if (new_pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
+	if (!ib_umem_check_rlimit_memlock(new_pinned)) {
 		atomic64_sub(npages, &mm->pinned_vm);
 		ret = -ENOMEM;
 		goto out;
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 1bb7507325bc..3889aefdfc6b 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -32,8 +32,8 @@
  */

 #include
-#include
 #include
+#include

 #include "qib.h"
@@ -94,14 +94,13 @@ int qib_map_page(struct pci_dev *hwdev, struct page *page, dma_addr_t *daddr)
 int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 		       struct page **p)
 {
-	unsigned long locked, lock_limit;
+	unsigned long locked;
 	size_t got;
 	int ret;

-	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 	locked = atomic64_add_return(num_pages, &current->mm->pinned_vm);

-	if (locked > lock_limit && !capable(CAP_IPC_LOCK)) {
+	if (!ib_umem_check_rlimit_memlock(locked)) {
 		ret = -ENOMEM;
 		goto bail;
 	}
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 84e0f41e7dfa..fdbb9737c7f0 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -34,13 +34,13 @@
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
 #include
+#include

 #include "usnic_log.h"
 #include "usnic_uiom.h"
@@ -90,7 +90,6 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 	struct scatterlist *sg;
 	struct usnic_uiom_chunk *chunk;
 	unsigned long locked;
-	unsigned long lock_limit;
 	unsigned long cur_base;
 	unsigned long npages;
 	int ret;
@@ -124,9 +123,8 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 	mmap_read_lock(mm);

 	locked = atomic64_add_return(npages, &current->mm->pinned_vm);
-	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

-	if ((locked > lock_limit) && !capable(CAP_IPC_LOCK)) {
+	if (!ib_umem_check_rlimit_memlock(locked)) {
 		ret = -ENOMEM;
 		goto out;
 	}
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index e6e25f15567d..54991ddeabc7 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -5,6 +5,7 @@

 #include
 #include
+#include
 #include
 #include
 #include
@@ -367,7 +368,6 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 	struct siw_umem *umem;
 	struct mm_struct *mm_s;
 	u64 first_page_va;
-	unsigned long mlock_limit;
 	unsigned int foll_flags = FOLL_LONGTERM;
 	int num_pages, num_chunks, i, rv = 0;
@@ -396,9 +396,9 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 	mmap_read_lock(mm_s);

-	mlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-	if (atomic64_add_return(num_pages, &mm_s->pinned_vm) > mlock_limit) {
+	if (!ib_umem_check_rlimit_memlock(
+		    atomic64_add_return(num_pages, &mm_s->pinned_vm))) {
 		rv = -ENOMEM;
 		goto out_sem_up;
 	}
diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
index fdbef3254e30..ad63a8db5502 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.c
+++ b/drivers/infiniband/sw/siw/siw_verbs.c
@@ -12,6 +12,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -1321,8 +1322,8 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 	struct siw_umem *umem = NULL;
 	struct siw_ureq_reg_mr ureq;
 	struct siw_device *sdev = to_siw_dev(pd->device);
-
-	unsigned long mem_limit = rlimit(RLIMIT_MEMLOCK);
+	unsigned long num_pages =
+		(PAGE_ALIGN(len + (start & ~PAGE_MASK))) >> PAGE_SHIFT;
 	int rv;

 	siw_dbg_pd(pd, "start: 0x%pK, va: 0x%pK, len: %llu\n",
@@ -1338,19 +1339,15 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		rv = -EINVAL;
 		goto err_out;
 	}
-	if (mem_limit != RLIM_INFINITY) {
-		unsigned long num_pages =
-			(PAGE_ALIGN(len + (start & ~PAGE_MASK))) >> PAGE_SHIFT;
-		mem_limit >>= PAGE_SHIFT;
-		if (num_pages > mem_limit - current->mm->locked_vm) {
-			siw_dbg_pd(pd, "pages req %lu, max %lu, lock %lu\n",
-				   num_pages, mem_limit,
-				   current->mm->locked_vm);
-			rv = -ENOMEM;
-			goto err_out;
-		}
+	if (!ib_umem_check_rlimit_memlock(num_pages + current->mm->locked_vm)) {
+		siw_dbg_pd(pd, "pages req %lu, max %lu, lock %lu\n",
+			   num_pages, rlimit(RLIMIT_MEMLOCK),
+			   current->mm->locked_vm);
+		rv = -ENOMEM;
+		goto err_out;
 	}
+
 	umem = siw_umem_get(start, len, ib_access_writable(rights));
 	if (IS_ERR(umem)) {
 		rv = PTR_ERR(umem);
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 95896472a82b..3970da64b01e 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 struct ib_ucontext;
 struct ib_umem_odp;
@@ -71,6 +72,16 @@ static inline size_t ib_umem_num_pages(struct ib_umem *umem)
 	return ib_umem_num_dma_blocks(umem, PAGE_SIZE);
 }

+static inline bool
+ib_umem_check_rlimit_memlock(unsigned long value)
+{
+	unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK);
+
+	if (lock_limit == RLIM_INFINITY || capable(CAP_IPC_LOCK))
+		return true;
+
+	return value <= PFN_DOWN(lock_limit);
+}
+
 static inline void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
 						struct ib_umem *umem,
 						unsigned long pgsz)