From patchwork Thu Jun 17 08:13:30 2021
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 12327005
From: Daniel Axtens
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, kasan-dev@googlegroups.com, akpm@linux-foundation.org
Cc: Daniel Axtens, Nicholas Piggin, David Gow
, Dmitry Vyukov, Andrey Konovalov, Uladzislau Rezki
Subject: [PATCH] mm/vmalloc: unbreak kasan vmalloc support
Date: Thu, 17 Jun 2021 18:13:30 +1000
Message-Id: <20210617081330.98629-1-dja@axtens.net>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0

In commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"),
__vmalloc_node_range was changed such that __get_vm_area_node was no
longer called with the requested/real size of the vmalloc allocation,
but rather with a rounded-up size.

This means that __get_vm_area_node called kasan_unpoison_vmalloc() with
a rounded-up size rather than the real size. This allowed access to too
much memory, so vmalloc out-of-bounds accesses were missed and the KASAN
kunit tests failed.

Pass the real size and the desired shift into __get_vm_area_node. This
allows it to round up the size for the underlying allocators while
still unpoisoning the correct quantity of shadow memory.

Adjust the other call-sites to pass in PAGE_SHIFT for the shift value.
Cc: Nicholas Piggin
Cc: David Gow
Cc: Dmitry Vyukov
Cc: Andrey Konovalov
Cc: Uladzislau Rezki (Sony)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=213335
Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
Signed-off-by: Daniel Axtens
Tested-by: David Gow
Reviewed-by: Nicholas Piggin
Reviewed-by: Uladzislau Rezki (Sony)
Tested-by: Andrey Konovalov
Acked-by: Andrey Konovalov
---
 mm/vmalloc.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index aaad569e8963..3471cbeb083c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2362,15 +2362,16 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 }
 
 static struct vm_struct *__get_vm_area_node(unsigned long size,
-		unsigned long align, unsigned long flags, unsigned long start,
-		unsigned long end, int node, gfp_t gfp_mask, const void *caller)
+		unsigned long align, unsigned long shift, unsigned long flags,
+		unsigned long start, unsigned long end, int node,
+		gfp_t gfp_mask, const void *caller)
 {
 	struct vmap_area *va;
 	struct vm_struct *area;
 	unsigned long requested_size = size;
 
 	BUG_ON(in_interrupt());
-	size = PAGE_ALIGN(size);
+	size = ALIGN(size, 1ul << shift);
 	if (unlikely(!size))
 		return NULL;
 
@@ -2402,8 +2403,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
 				       unsigned long start, unsigned long end,
 				       const void *caller)
 {
-	return __get_vm_area_node(size, 1, flags, start, end, NUMA_NO_NODE,
-				  GFP_KERNEL, caller);
+	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags, start, end,
+				  NUMA_NO_NODE, GFP_KERNEL, caller);
 }
 
 /**
@@ -2419,7 +2420,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
  */
 struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
 {
-	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
+	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags,
+				  VMALLOC_START, VMALLOC_END,
 				  NUMA_NO_NODE, GFP_KERNEL,
				  __builtin_return_address(0));
 }
 
@@ -2427,7 +2429,8 @@ struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
 struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
 				const void *caller)
 {
-	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
+	return __get_vm_area_node(size, 1, PAGE_SHIFT, flags,
+				  VMALLOC_START, VMALLOC_END,
 				  NUMA_NO_NODE, GFP_KERNEL, caller);
 }
 
@@ -2949,9 +2952,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	}
 
 again:
-	size = PAGE_ALIGN(size);
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
-				vm_flags, start, end, node, gfp_mask, caller);
+	area = __get_vm_area_node(real_size, align, shift, VM_ALLOC |
+				  VM_UNINITIALIZED | vm_flags, start, end, node,
+				  gfp_mask, caller);
 	if (!area) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, vm_struct allocation failed",
@@ -2970,6 +2973,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	 */
 	clear_vm_uninitialized_flag(area);
 
+	size = PAGE_ALIGN(size);
 	kmemleak_vmalloc(area, size, gfp_mask);
 
 	return addr;