From patchwork Wed Mar 23 13:49:38 2022
From: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Cc: linux-block@vger.kernel.org, bostroesser@gmail.com
Subject: [PATCH v2 1/3] scsi: target: tcmu: Fix possible page UAF
Date: Wed, 23 Mar 2022 21:49:38 +0800
Message-Id: <20220323134940.31463-2-xiaoguang.wang@linux.alibaba.com>
In-Reply-To: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>
References: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>

tcmu_try_get_data_page() looks up pages under cmdr_lock, but it does not
take a refcount properly and just returns the page pointer. By the time
tcmu_try_get_data_page() returns, the page may already have been freed by
tcmu_blocks_release(). We need to call get_page() while still holding
cmdr_lock to avoid racing with a concurrent tcmu_blocks_release().
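For context, the fixed lookup pattern reduces to the sketch below. This is
illustrative only, not the driver source: the helper name
data_page_lookup_sketch() is made up, while the tcmu_dev fields and the
kernel calls (mutex_lock(), xa_load(), get_page()) are the ones the patch
actually uses.

/* Sketch: take the page reference while cmdr_lock still excludes release. */
static struct page *data_page_lookup_sketch(struct tcmu_dev *udev, uint32_t dpi)
{
	struct page *page;

	mutex_lock(&udev->cmdr_lock);
	page = xa_load(&udev->data_pages, dpi);
	if (page)
		get_page(page);	/* pin before dropping the lock */
	mutex_unlock(&udev->cmdr_lock);

	/*
	 * Without the get_page() above, tcmu_blocks_release() could free
	 * the page right after the unlock, leaving the caller with a
	 * dangling pointer -- the use-after-free this patch closes.
	 */
	return page;
}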
Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
---
 drivers/target/target_core_user.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 7b2a89a67cdb..06a5c4086551 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1820,6 +1820,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	mutex_lock(&udev->cmdr_lock);
 	page = xa_load(&udev->data_pages, dpi);
 	if (likely(page)) {
+		get_page(page);
 		mutex_unlock(&udev->cmdr_lock);
 		return page;
 	}
@@ -1876,6 +1877,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 		/* For the vmalloc()ed cmd area pages */
 		addr = (void *)(unsigned long)info->mem[mi].addr + offset;
 		page = vmalloc_to_page(addr);
+		get_page(page);
 	} else {
 		uint32_t dpi;
 
@@ -1886,7 +1888,6 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
 			return VM_FAULT_SIGBUS;
 	}
 
-	get_page(page);
 	vmf->page = page;
 	return 0;
 }

From patchwork Wed Mar 23 13:49:39 2022
From: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Cc: linux-block@vger.kernel.org, bostroesser@gmail.com
Subject: [PATCH v2 2/3] scsi: target: tcmu: Fix possible data corruption
Date: Wed, 23 Mar 2022 21:49:39 +0800
Message-Id: <20220323134940.31463-3-xiaoguang.wang@linux.alibaba.com>
In-Reply-To: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>
References: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>

When tcmu_vma_fault() successfully gets a page, find_free_blocks() may
run in before the current context completes the page-fault procedure and
call unmap_mapping_range() to unmap the page. If find_free_blocks()
finishes its job first and the earlier page-fault procedure then resumes
and completes, a truncated page has been mapped into user space. Note
that tcmu_vma_fault() holds a refcount on this page, so no other
subsystem will reuse it until the user-space address is unmapped later.
If another command runs in later and needs to extend dbi_thresh, it may
reuse the data_bitmap slot corresponding to the previous page. We then
allocate a new page for this slot in the data area, but no page fault
will occur again because the stale mapping is still valid, so the real
request's data is lost. To fix this issue, call unmap_mapping_range()
when extending dbi_thresh to unmap any existing user-space mappings of
that data-area range; this is a simple method. Filesystem
implementations run into the same issue, but they usually lock the page
when vm_operations_struct->fault gets a page and unlock it after
finish_fault() completes, and on the truncate side they lock pages in
truncate_inode_pages() to guard against racing page faults. We could
adopt similar code to fix this issue.

Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
---
 drivers/target/target_core_user.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 06a5c4086551..9196188504ec 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -862,6 +862,7 @@ static int tcmu_alloc_data_space(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 	if (space < cmd->dbi_cnt) {
 		unsigned long blocks_left =
 			(udev->max_blocks - udev->dbi_thresh) + space;
+		loff_t off, len;
 
 		if (blocks_left < cmd->dbi_cnt) {
 			pr_debug("no data space: only %lu available, but ask for %u\n",
@@ -870,6 +871,10 @@ static int tcmu_alloc_data_space(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 			return -1;
 		}
 
+		off = udev->data_off + (loff_t)udev->dbi_thresh * udev->data_blk_size;
+		len = cmd->dbi_cnt * udev->data_blk_size;
+		unmap_mapping_range(udev->inode->i_mapping, off, len, 1);
+
 		udev->dbi_thresh += cmd->dbi_cnt;
 		if (udev->dbi_thresh > udev->max_blocks)
 			udev->dbi_thresh = udev->max_blocks;
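To make the race described in the commit message above concrete, the extend
path after this patch reduces to the sketch below. It is illustrative only,
not the driver source: the helper name extend_dbi_thresh_sketch() is made
up, while the tcmu_dev fields and unmap_mapping_range() are those the patch
actually touches.

/* Sketch: tear down stale user-space mappings before reusing slots. */
static void extend_dbi_thresh_sketch(struct tcmu_dev *udev, u32 dbi_cnt)
{
	loff_t off, len;

	/* Data-area range whose bitmap slots are about to be handed out again. */
	off = udev->data_off + (loff_t)udev->dbi_thresh * udev->data_blk_size;
	len = (loff_t)dbi_cnt * udev->data_blk_size;

	/*
	 * Zap any user-space PTEs covering this range. The next access
	 * faults again and maps the newly allocated page, instead of
	 * silently writing through the old, truncated mapping.
	 */
	unmap_mapping_range(udev->inode->i_mapping, off, len, 1);

	udev->dbi_thresh += dbi_cnt;
	if (udev->dbi_thresh > udev->max_blocks)
		udev->dbi_thresh = udev->max_blocks;
}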
From patchwork Wed Mar 23 13:49:40 2022
From: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
To: linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Cc: linux-block@vger.kernel.org, bostroesser@gmail.com
Subject: [PATCH v2 3/3] scsi: target: tcmu: Use address_space->invalidate_lock
Date: Wed, 23 Mar 2022 21:49:40 +0800
Message-Id: <20220323134940.31463-4-xiaoguang.wang@linux.alibaba.com>
In-Reply-To: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>
References: <20220323134940.31463-1-xiaoguang.wang@linux.alibaba.com>

Currently tcmu_vma_fault() uses udev->cmdr_lock to serialize against
find_free_blocks(), which unmaps idle pages and truncates them. This
work is much like a filesystem truncate operation, and filesystems use
address_space->invalidate_lock to protect against such races. Replace
cmdr_lock with address_space->invalidate_lock in the tcmu fault path,
which also allows page faults to run concurrently.

Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
---
 drivers/target/target_core_user.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 9196188504ec..81bfa553cc67 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1820,13 +1820,14 @@ static int tcmu_find_mem_index(struct vm_area_struct *vma)
 
 static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 {
+	struct address_space *mapping = udev->inode->i_mapping;
 	struct page *page;
 
-	mutex_lock(&udev->cmdr_lock);
+	filemap_invalidate_lock_shared(mapping);
 	page = xa_load(&udev->data_pages, dpi);
 	if (likely(page)) {
 		get_page(page);
-		mutex_unlock(&udev->cmdr_lock);
+		filemap_invalidate_unlock_shared(mapping);
 		return page;
 	}
 
@@ -1836,7 +1837,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	 */
 	pr_err("Invalid addr to data page mapping (dpi %u) on device %s\n",
 	       dpi, udev->name);
-	mutex_unlock(&udev->cmdr_lock);
+	filemap_invalidate_unlock_shared(mapping);
 	return NULL;
 }
 
@@ -3116,6 +3117,7 @@ static void find_free_blocks(void)
 	loff_t off;
 	u32 pages_freed, total_pages_freed = 0;
 	u32 start, end, block, total_blocks_freed = 0;
+	struct address_space *mapping;
 
 	if (atomic_read(&global_page_count) <= tcmu_global_max_pages)
 		return;
@@ -3139,6 +3141,8 @@ static void find_free_blocks(void)
 			continue;
 		}
 
+		mapping = udev->inode->i_mapping;
+		filemap_invalidate_lock(mapping);
 		end = udev->dbi_max + 1;
 		block = find_last_bit(udev->data_bitmap, end);
 		if (block == udev->dbi_max) {
@@ -3146,6 +3150,7 @@ static void find_free_blocks(void)
 			 * The last bit is dbi_max, so it is not possible
 			 * reclaim any blocks.
 			 */
+			filemap_invalidate_unlock(mapping);
 			mutex_unlock(&udev->cmdr_lock);
 			continue;
 		} else if (block == end) {
@@ -3159,10 +3164,11 @@ static void find_free_blocks(void)
 
 		/* Here will truncate the data area from off */
 		off = udev->data_off + (loff_t)start * udev->data_blk_size;
-		unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
+		unmap_mapping_range(mapping, off, 0, 1);
 
 		/* Release the block pages */
 		pages_freed = tcmu_blocks_release(udev, start, end - 1);
+		filemap_invalidate_unlock(mapping);
 		mutex_unlock(&udev->cmdr_lock);
 
 		total_pages_freed += pages_freed;
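The locking scheme the series ends up with can be summarized by the
schematic sketch below. It is not the driver source: the two code fragments
are condensed from the diff above, and filemap_invalidate_lock() /
filemap_invalidate_unlock() and their _shared variants are the real kernel
wrappers around mapping->invalidate_lock.

	/*
	 * Fault side (tcmu_try_get_data_page): take the lock shared, so
	 * faults on different pages may proceed concurrently.
	 */
	filemap_invalidate_lock_shared(mapping);
	page = xa_load(&udev->data_pages, dpi);
	if (page)
		get_page(page);
	filemap_invalidate_unlock_shared(mapping);

	/*
	 * Truncate side (find_free_blocks): take the lock exclusive, so no
	 * fault can install a PTE between unmap_mapping_range() and
	 * tcmu_blocks_release().
	 */
	filemap_invalidate_lock(mapping);
	unmap_mapping_range(mapping, off, 0, 1);
	tcmu_blocks_release(udev, start, end - 1);
	filemap_invalidate_unlock(mapping);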