From patchwork Tue Nov 19 20:55:28 2024
X-Patchwork-Submitter: Brian Johannesmeyer
X-Patchwork-Id: 13880538
From: Brian Johannesmeyer
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
Cc: Brian Johannesmeyer, Raphael Isemann, Cristiano Giuffrida, Herbert Bos, Greg KH
Subject: [RFC v2 1/2] dmapool: Move pool metadata into non-DMA memory
Date: Tue, 19 Nov 2024 21:55:28 +0100
Message-Id: <20241119205529.3871048-2-bjohannesmeyer@gmail.com>
In-Reply-To: <20241119205529.3871048-1-bjohannesmeyer@gmail.com>
References: <20241119205529.3871048-1-bjohannesmeyer@gmail.com>
MIME-Version: 1.0
If a `struct dma_block` object resides in DMA memory, a malicious peripheral device can corrupt its metadata --- specifically, its `next_block` pointer, which links blocks in a DMA pool. By corrupting these pointers, an attacker can manipulate `dma_pool_alloc()` into returning attacker-controlled pointers, which can lead to kernel memory corruption in any driver that calls it.

To prevent this, move the `struct dma_block` metadata into non-DMA memory, ensuring that devices cannot tamper with the internal pointers of the DMA pool allocator. Specifically:

- Add a `vaddr` field to `struct dma_block` that points to the actual DMA-accessible block.
- Maintain an array of `struct dma_block` objects in `struct dma_page` to track the metadata of each block within an allocated page.

This change secures the DMA pool allocator by keeping its metadata in kernel memory, inaccessible to peripheral devices, thereby preventing attacks that could corrupt kernel memory through DMA operations.
Co-developed-by: Raphael Isemann
Signed-off-by: Raphael Isemann
Signed-off-by: Brian Johannesmeyer
---
 mm/dmapool.c | 60 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 52 insertions(+), 8 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index a151a21e571b..25005a9fc201 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,6 +43,7 @@
 struct dma_block {
 	struct dma_block *next_block;
 	dma_addr_t dma;
+	void *vaddr;
 };
 
 struct dma_pool {		/* the pool */
@@ -64,6 +65,8 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	struct list_head page_list;
 	void *vaddr;
 	dma_addr_t dma;
+	struct dma_block *blocks;
+	size_t blocks_per_page;
 };
 
 static DEFINE_MUTEX(pools_lock);
@@ -91,14 +94,35 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 
 static DEVICE_ATTR_RO(pools);
 
+static struct dma_block *pool_find_block(struct dma_pool *pool, void *vaddr)
+{
+	struct dma_page *page;
+	size_t offset, index;
+
+	list_for_each_entry(page, &pool->page_list, page_list) {
+		if (vaddr < page->vaddr)
+			continue;
+		offset = vaddr - page->vaddr;
+		if (offset >= pool->allocation)
+			continue;
+
+		index = offset / pool->size;
+		if (index >= page->blocks_per_page)
+			return NULL;
+
+		return &page->blocks[index];
+	}
+	return NULL;
+}
+
 #ifdef	DMAPOOL_DEBUG
 static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
 			     gfp_t mem_flags)
 {
-	u8 *data = (void *)block;
+	u8 *data = (void *)block->vaddr;
 	int i;
 
-	for (i = sizeof(struct dma_block); i < pool->size; i++) {
+	for (i = 0; i < pool->size; i++) {
 		if (data[i] == POOL_POISON_FREED)
 			continue;
 		dev_err(pool->dev, "%s %s, %p (corrupted)\n", __func__,
@@ -114,7 +138,7 @@ static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
 	}
 
 	if (!want_init_on_alloc(mem_flags))
-		memset(block, POOL_POISON_ALLOCATED, pool->size);
+		memset(block->vaddr, POOL_POISON_ALLOCATED, pool->size);
 }
 
 static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
@@ -143,7 +167,7 @@ static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 	}
 
 	while (block) {
-		if (block != vaddr) {
+		if (block->vaddr != vaddr) {
 			block = block->next_block;
 			continue;
 		}
@@ -301,6 +325,7 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int next_boundary = pool->boundary, offset = 0;
 	struct dma_block *block, *first = NULL, *last = NULL;
+	size_t i = 0;
 
 	pool_init_page(pool, page);
 	while (offset + pool->size <= pool->allocation) {
@@ -310,7 +335,8 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 			continue;
 		}
 
-		block = page->vaddr + offset;
+		block = &page->blocks[i];
+		block->vaddr = page->vaddr + offset;
 		block->dma = page->dma + offset;
 		block->next_block = NULL;
 
@@ -322,6 +348,7 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 
 		offset += pool->size;
 		pool->nr_blocks++;
+		i++;
 	}
 
 	last->next_block = pool->next_block;
@@ -339,9 +366,18 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 	if (!page)
 		return NULL;
 
+	page->blocks_per_page = pool->allocation / pool->size;
+	page->blocks = kmalloc_array(page->blocks_per_page,
+				     sizeof(struct dma_block), GFP_KERNEL);
+	if (!page->blocks) {
+		kfree(page);
+		return NULL;
+	}
+
 	page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
 					 &page->dma, mem_flags);
 	if (!page->vaddr) {
+		kfree(page->blocks);
 		kfree(page);
 		return NULL;
 	}
@@ -383,6 +419,7 @@ void dma_pool_destroy(struct dma_pool *pool)
 		if (!busy)
 			dma_free_coherent(pool->dev, pool->allocation,
 					  page->vaddr, page->dma);
+		kfree(page->blocks);
 		list_del(&page->page_list);
 		kfree(page);
 	}
@@ -432,9 +469,9 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	*handle = block->dma;
 	pool_check_block(pool, block, mem_flags);
 	if (want_init_on_alloc(mem_flags))
-		memset(block, 0, pool->size);
+		memset(block->vaddr, 0, pool->size);
 
-	return block;
+	return block->vaddr;
 }
 EXPORT_SYMBOL(dma_pool_alloc);
 
@@ -449,9 +486,16 @@ EXPORT_SYMBOL(dma_pool_alloc);
  */
 void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
-	struct dma_block *block = vaddr;
+	struct dma_block *block;
 	unsigned long flags;
 
+	block = pool_find_block(pool, vaddr);
+	if (!block) {
+		dev_err(pool->dev, "%s %s, invalid vaddr %p\n",
+			__func__, pool->name, vaddr);
+		return;
+	}
+
 	spin_lock_irqsave(&pool->lock, flags);
 	if (!pool_block_err(pool, vaddr, dma)) {
 		pool_block_push(pool, block, dma);