From patchwork Wed Aug 29 11:35:08 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10580025
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon,
    Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
    Marc Zyngier, Dave Martin, Ard Biesheuvel, "Eric W. Biederman",
    Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
    "Kirill A. Shutemov", Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
    kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v6 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW
Date: Wed, 29 Aug 2018 13:35:08 +0200

KHWASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires
1/16th of the kernel virtual address space for the shadow memory.

This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when KHWASAN is enabled.

Signed-off-by: Andrey Konovalov
---
 arch/arm64/Makefile             |  2 +-
 arch/arm64/include/asm/memory.h | 13 +++++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 106039d25e2f..17047b8ab984 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -94,7 +94,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_HW), 4, 3)
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
 	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b96442960aea..f5e262ee76c1 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -74,12 +74,17 @@
 #define KERNEL_END        _end

 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address
+ * space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
  */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_HW
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT	1
 #else
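
For context, the scale shift feeds KASAN's standard address translation,
shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET (see
kasan_mem_to_shadow() in include/linux/kasan.h). Below is a minimal
user-space sketch of that arithmetic; the shadow offset and the sample
address are made-up illustrative values, not the kernel's actual VA
layout, which the Makefile hunk above derives from CONFIG_ARM64_VA_BITS:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Illustrative shadow offset only -- the real value is computed
	 * from CONFIG_ARM64_VA_BITS at build time.
	 */
	#define SHADOW_OFFSET 0xdfffa00000000000ULL

	/*
	 * kasan_mem_to_shadow()-style translation: one shadow byte covers
	 * (1 << scale_shift) bytes of kernel memory, i.e. 8 bytes for
	 * generic KASAN (shift 3) and 16 bytes for KHWASAN (shift 4).
	 */
	static uint64_t mem_to_shadow(uint64_t addr, unsigned int scale_shift)
	{
		return (addr >> scale_shift) + SHADOW_OFFSET;
	}

	int main(void)
	{
		uint64_t addr = 0xffff000012345678ULL;	/* made-up kernel address */

		printf("KASAN   (shift 3): shadow byte at 0x%llx\n",
		       (unsigned long long)mem_to_shadow(addr, 3));
		printf("KHWASAN (shift 4): shadow byte at 0x%llx\n",
		       (unsigned long long)mem_to_shadow(addr, 4));
		return 0;
	}

Raising the shift from 3 to 4 halves the shadow footprint, which is why
KASAN_SHADOW_SIZE in the memory.h hunk shrinks from 1/8th to 1/16th of
the VA_BITS-sized address space under CONFIG_KASAN_HW.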