From patchwork Wed Sep 19 18:54:44 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10606217
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
    Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
    Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
    Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart,
    Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v8 05/20] kasan, arm64: adjust shadow size for tag-based mode
Date: Wed, 19 Sep 2018 20:54:44 +0200
X-Mailer: git-send-email 2.19.0.397.gdd90340f6a-goog
List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org

Tag-based KASAN uses 1 shadow byte for 16 bytes of kernel memory, so it
requires 1/16th of the kernel virtual address space for the shadow memory.

This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when the tag-based KASAN
mode is enabled.
Signed-off-by: Andrey Konovalov
---
 arch/arm64/Makefile             |  2 +-
 arch/arm64/include/asm/memory.h | 13 +++++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 106039d25e2f..11f4750d8d41 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -94,7 +94,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_SW_TAGS), 4, 3)
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
			(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
			+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b96442960aea..0f1e024a951f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -74,12 +74,17 @@
 #define KERNEL_END        _end

 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
+ * address space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
 */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT	1
 #else