From patchwork Thu Dec 30 19:14:51 2021
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 12701682
From: andrey.konovalov@linux.dev
To: Andrew Morton
Cc: Andrey Konovalov, Marco Elver, Alexander Potapenko, Dmitry Vyukov,
    Andrey Ryabinin, kasan-dev@googlegroups.com, linux-mm@kvack.org,
    Vincenzo Frascino, Catalin Marinas, Will Deacon, Mark Rutland,
    linux-arm-kernel@lists.infradead.org, Peter Collingbourne,
    Evgenii Stepanov, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH mm v5 26/39] kasan, vmalloc: unpoison VM_ALLOC pages after mapping
Date: Thu, 30 Dec 2021 20:14:51 +0100
Message-Id: <2aec888039eb8e7f9bd8c1f8bb289081f0136e60.1640891329.git.andreyknvl@google.com>

From: Andrey Konovalov

Make KASAN unpoison vmalloc mappings after they have been mapped in,
when possible: for vmalloc() (identified via VM_ALLOC) and
vm_map_ram().

The reasons for this are:

- For vmalloc() and vm_map_ram(): pages don't get unpoisoned if
  mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags
  via kasan_unpoison_vmalloc().

As a part of these changes, the return value of __vmalloc_node_range()
is changed to area->addr. This is a non-functional change, as
__vmalloc_area_node() returns area->addr anyway.

Signed-off-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko

---

Changes v3->v4:
- Don't forget to save tagged addr to vm_struct->addr for VM_ALLOC
  so that find_vm_area(addr)->addr == addr for vmalloc().
- Reword comments.
- Update patch description.

Changes v2->v3:
- Update patch description.

---
 mm/vmalloc.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 598bb65263c7..bcf973a54737 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2210,14 +2210,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 		mem = (void *)addr;
 	}
 
-	mem = kasan_unpoison_vmalloc(mem, size);
-
 	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
 				pages, PAGE_SHIFT) < 0) {
 		vm_unmap_ram(mem, count);
 		return NULL;
 	}
 
+	/* Mark the pages as accessible, now that they are mapped. */
+	mem = kasan_unpoison_vmalloc(mem, size);
+
 	return mem;
 }
 EXPORT_SYMBOL(vm_map_ram);
@@ -2445,7 +2446,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 
 	setup_vmalloc_vm(area, va, flags, caller);
 
-	area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+	/*
+	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
+	 * best-effort approach, as they can be mapped outside of vmalloc code.
+	 * For VM_ALLOC mappings, the pages are marked as accessible after
+	 * getting mapped in __vmalloc_node_range().
+	 */
+	if (!(flags & VM_ALLOC))
+		area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
 
 	return area;
 }
@@ -3054,7 +3062,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller)
 {
 	struct vm_struct *area;
-	void *addr;
+	void *ret;
 	unsigned long real_size = size;
 	unsigned long real_align = align;
 	unsigned int shift = PAGE_SHIFT;
@@ -3116,10 +3124,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		prot = arch_vmap_pgprot_tagged(prot);
 
 	/* Allocate physical pages and map them into vmalloc space. */
-	addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
-	if (!addr)
+	ret = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
+	if (!ret)
 		goto fail;
 
+	/* Mark the pages as accessible, now that they are mapped. */
+	area->addr = kasan_unpoison_vmalloc(area->addr, real_size);
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3131,7 +3142,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!(vm_flags & VM_DEFER_KMEMLEAK))
 		kmemleak_vmalloc(area, size, gfp_mask);
 
-	return addr;
+	return area->addr;
 
 fail:
 	if (shift > PAGE_SHIFT) {
@@ -3823,7 +3834,10 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 	spin_unlock(&vmap_area_lock);
 
-	/* mark allocated areas as accessible */
+	/*
+	 * Mark allocated areas as accessible. Do it now as a best-effort
+	 * approach, as they can be mapped outside of vmalloc code.
+	 */
 	for (area = 0; area < nr_vms; area++)
 		vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
 							 vms[area]->size);
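
[Editor's note: the following sketch is not part of the patch. It illustrates
the v4 changelog invariant that, after this change, the tagged pointer
returned by vmalloc() under HW_TAGS KASAN is also saved to vm_struct->addr,
so find_vm_area(addr)->addr == addr. The function name
vmalloc_kasan_addr_selftest and its registration as a built-in late_initcall
are hypothetical; find_vm_area() is not exported to modules, so this assumes
built-in code.]

/*
 * Hypothetical built-in self-test (not from this patch): checks that the
 * pointer returned by vmalloc() -- which carries a KASAN tag under
 * HW_TAGS -- matches the address recorded in the backing vm_struct.
 */
#include <linux/bug.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int __init vmalloc_kasan_addr_selftest(void)
{
	void *p = vmalloc(PAGE_SIZE);
	struct vm_struct *vm;

	if (!p)
		return -ENOMEM;

	vm = find_vm_area(p);
	/* With this patch applied, the tagged address is saved to vm->addr. */
	WARN_ON(!vm || vm->addr != p);

	vfree(p);
	return 0;
}
late_initcall(vmalloc_kasan_addr_selftest);

[The ordering in the patch is what makes this hold: under HW_TAGS,
kasan_unpoison_vmalloc() writes memory tags into the allocation itself, which
requires the pages to already be mapped, so the unpoison call must come after
vmap_pages_range()/__vmalloc_area_node() rather than before.]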