From patchwork Wed Dec 18 04:51:46 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11299517
Date: Tue, 17 Dec 2019 20:51:46 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, aryabinin@virtuozzo.com, cai@lca.pw,
 dja@axtens.net, dvyukov@google.com, glider@google.com, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org, urezki@gmail.com
Subject: [patch 3/6] kasan: use apply_to_existing_page_range() for releasing vmalloc shadow
Message-ID: <20191218045146.MXsRxaBsb%akpm@linux-foundation.org>
In-Reply-To: <20191217205020.6e2eaefc78710ec646e99aa9@linux-foundation.org>

From: Daniel Axtens
Subject: kasan: use apply_to_existing_page_range() for releasing vmalloc shadow

kasan_release_vmalloc() uses apply_to_page_range() to release vmalloc
shadow.  Unfortunately, apply_to_page_range() can allocate memory to
fill in page table entries, which is not what we want.

Also, kasan_release_vmalloc() is called under free_vmap_area_lock, so if
apply_to_page_range() does allocate memory, we get a sleep-in-atomic bug:

BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x199/0x216 lib/dump_stack.c:118
 ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
 __might_sleep+0x95/0x190 kernel/sched/core.c:6753
 prepare_alloc_pages mm/page_alloc.c:4681 [inline]
 __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
 alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
 alloc_pages include/linux/gfp.h:532 [inline]
 __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
 __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
 pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
 __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
 apply_to_pte_range mm/memory.c:2031 [inline]
 apply_to_pmd_range mm/memory.c:2068 [inline]
 apply_to_pud_range mm/memory.c:2088 [inline]
 apply_to_p4d_range mm/memory.c:2108 [inline]
 apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
 kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
 __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
 try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
 free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
 free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
 remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
 vm_remove_mappings mm/vmalloc.c:2236 [inline]
 __vunmap+0x223/0xa20 mm/vmalloc.c:2299
 __vfree+0x3f/0xd0 mm/vmalloc.c:2356
 __vmalloc_area_node mm/vmalloc.c:2507 [inline]
 __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
 __vmalloc_node mm/vmalloc.c:2607 [inline]
 __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
 vzalloc+0x6f/0x80 mm/vmalloc.c:2666
 alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
 alloc_pg_vec net/packet/af_packet.c:4258 [inline]
 packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
 packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
 __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
 __do_sys_setsockopt net/socket.c:2133 [inline]
 __se_sys_setsockopt net/socket.c:2130 [inline]
 __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
 do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Switch to using the apply_to_existing_page_range() helper instead, which
won't allocate memory.
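For reference, the two walkers share the same calling convention; the
difference is that apply_to_existing_page_range() skips holes in the
range instead of allocating page tables to fill them.  A minimal sketch
of how such a callback plugs in (illustrative only: clear_shadow_pte()
is a made-up name, not the kasan_depopulate_vmalloc_pte() this patch
actually passes):

	/*
	 * pte_fn_t callbacks have the form
	 * int fn(pte_t *pte, unsigned long addr, void *data)
	 * and are invoked once for each PTE the walker visits.
	 */
	static int clear_shadow_pte(pte_t *ptep, unsigned long addr,
				    void *unused)
	{
		spin_lock(&init_mm.page_table_lock);
		if (!pte_none(*ptep))
			pte_clear(&init_mm, addr, ptep);
		spin_unlock(&init_mm.page_table_lock);
		return 0;
	}

	/*
	 * apply_to_page_range() would allocate intermediate page
	 * tables for any holes in [addr, addr + size); the _existing_
	 * variant only visits PTEs that are already present, so it
	 * never calls into the page allocator and is safe under a
	 * spinlock.
	 */
	apply_to_existing_page_range(&init_mm, addr, size,
				     clear_shadow_pte, NULL);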
[akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Daniel Axtens
Reported-by: Dmitry Vyukov
Reviewed-by: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Qian Cai
Cc: Uladzislau Rezki (Sony)
Signed-off-by: Andrew Morton
---

 mm/kasan/common.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- a/mm/kasan/common.c~kasan-use-apply_to_existing_pages-for-releasing-vmalloc-shadow
+++ a/mm/kasan/common.c
@@ -957,6 +957,7 @@ void kasan_release_vmalloc(unsigned long
 {
 	void *shadow_start, *shadow_end;
 	unsigned long region_start, region_end;
+	unsigned long size;
 
 	region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
 	region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
@@ -979,9 +980,11 @@ void kasan_release_vmalloc(unsigned long
 	shadow_end = kasan_mem_to_shadow((void *)region_end);
 
 	if (shadow_end > shadow_start) {
-		apply_to_page_range(&init_mm, (unsigned long)shadow_start,
-				    (unsigned long)(shadow_end - shadow_start),
-				    kasan_depopulate_vmalloc_pte, NULL);
+		size = shadow_end - shadow_start;
+		apply_to_existing_page_range(&init_mm,
+					     (unsigned long)shadow_start,
+					     size, kasan_depopulate_vmalloc_pte,
+					     NULL);
 		flush_tlb_kernel_range((unsigned long)shadow_start,
 				       (unsigned long)shadow_end);
 	}
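For completeness, both helpers are declared in include/linux/mm.h with
identical signatures (reproduced here as a sketch of this series' state,
not quoted verbatim from the tree):

	int apply_to_page_range(struct mm_struct *mm, unsigned long address,
				unsigned long size, pte_fn_t fn, void *data);
	int apply_to_existing_page_range(struct mm_struct *mm,
					 unsigned long address,
					 unsigned long size,
					 pte_fn_t fn, void *data);

The new size local in the second hunk appears to exist only to keep the
longer function name within the kernel's line-length style; the value
passed is the same shadow_end - shadow_start as before.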