From patchwork Thu Dec 5 14:04:07 2019
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 11274831
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, aryabinin@virtuozzo.com,
	glider@google.com, linux-kernel@vger.kernel.org, dvyukov@google.com
Cc: daniel@iogearbox.net, cai@lca.pw, Daniel Axtens <dja@axtens.net>,
	syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com,
	syzbot+59b7daa4315e07a994f1@syzkaller.appspotmail.com
Subject: [PATCH 3/3] kasan: don't assume percpu shadow allocations will succeed
Date: Fri, 6 Dec 2019 01:04:07 +1100
Message-Id: <20191205140407.1874-3-dja@axtens.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191205140407.1874-1-dja@axtens.net>
References: <20191205140407.1874-1-dja@axtens.net>

syzkaller and the fault injector showed that I was wrong to assume
that we could ignore percpu shadow allocation failures.

Handle failures properly. Merge all the allocated areas back into the
free list and release the shadow, then clean up and return NULL. The
shadow is released unconditionally, which relies upon the fact that
the release function is able to tolerate pages not being present.

Also clean up shadows in the recovery path - currently they are not
released, which leaks a bit of memory.
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Reported-by: syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com
Reported-by: syzbot+59b7daa4315e07a994f1@syzkaller.appspotmail.com
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/vmalloc.c | 48 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 37af94b6cf30..fa5688093a88 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3291,7 +3291,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	struct vmap_area **vas, *va;
 	struct vm_struct **vms;
 	int area, area2, last_area, term_area;
-	unsigned long base, start, size, end, last_end;
+	unsigned long base, start, size, end, last_end, orig_start, orig_end;
 	bool purged = false;
 	enum fit_type type;
 
@@ -3421,6 +3421,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 	spin_unlock(&free_vmap_area_lock);
 
+	/* populate the kasan shadow space */
+	for (area = 0; area < nr_vms; area++) {
+		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
+			goto err_free_shadow;
+
+		kasan_unpoison_vmalloc((void *)vas[area]->va_start,
+				       sizes[area]);
+	}
+
 	/* insert all vm's */
 	spin_lock(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
@@ -3431,13 +3440,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 	spin_unlock(&vmap_area_lock);
 
-	/* populate the shadow space outside of the lock */
-	for (area = 0; area < nr_vms; area++) {
-		/* assume success here */
-		kasan_populate_vmalloc(vas[area]->va_start, sizes[area]);
-		kasan_unpoison_vmalloc((void *)vms[area]->addr, sizes[area]);
-	}
-
 	kfree(vas);
 	return vms;
 
@@ -3449,8 +3451,12 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	 * and when pcpu_get_vm_areas() is success.
 	 */
 	while (area--) {
-		merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
-				       &free_vmap_area_list);
+		orig_start = vas[area]->va_start;
+		orig_end = vas[area]->va_end;
+		va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+					    &free_vmap_area_list);
+		kasan_release_vmalloc(orig_start, orig_end,
+				      va->va_start, va->va_end);
 		vas[area] = NULL;
 	}
 
@@ -3485,6 +3491,28 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	kfree(vas);
 	kfree(vms);
 	return NULL;
+
+err_free_shadow:
+	spin_lock(&free_vmap_area_lock);
+	/*
+	 * We release all the vmalloc shadows, even the ones for regions that
+	 * hadn't been successfully added. This relies on kasan_release_vmalloc
+	 * being able to tolerate this case.
+	 */
+	for (area = 0; area < nr_vms; area++) {
+		orig_start = vas[area]->va_start;
+		orig_end = vas[area]->va_end;
+		va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+					    &free_vmap_area_list);
+		kasan_release_vmalloc(orig_start, orig_end,
+				      va->va_start, va->va_end);
+		vas[area] = NULL;
+		kfree(vms[area]);
+	}
+	spin_unlock(&free_vmap_area_lock);
+	kfree(vas);
+	kfree(vms);
+	return NULL;
 }
 
 /**
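
[Editor's note: a minimal userspace C sketch of the unwind pattern the
patch adopts, for readers outside the kernel tree. On any allocation
failure, every slot is released unconditionally; this works because the
release helper tolerates slots that were never populated, mirroring how
kasan_release_vmalloc() must tolerate shadow pages that were never
mapped. The names populate_areas() and release_area() are invented for
illustration and do not exist in the kernel.]

#include <stdlib.h>
#include <string.h>

#define NR_AREAS 4

/* Tolerant release: a no-op on slots that were never populated. */
static void release_area(void **slot)
{
	free(*slot);	/* free(NULL) is defined to do nothing */
	*slot = NULL;
}

static int populate_areas(void *areas[NR_AREAS], size_t size)
{
	int i;

	/* Start from a known-empty state so unwinding is always safe. */
	memset(areas, 0, NR_AREAS * sizeof(*areas));

	for (i = 0; i < NR_AREAS; i++) {
		areas[i] = malloc(size);
		if (!areas[i])
			goto err_release;
	}
	return 0;

err_release:
	/*
	 * Unwind every slot, including ones the loop never reached;
	 * release_area() tolerates that, so no bookkeeping of how far
	 * the populate loop got is needed.
	 */
	for (i = 0; i < NR_AREAS; i++)
		release_area(&areas[i]);
	return -1;
}

The design point is the same one the commit message makes: making the
release path unconditional keeps the error path simple, at the cost of
requiring the release function to cope with partially-initialised state.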