From patchwork Mon Dec 10 01:15:02 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicolas Boichat
X-Patchwork-Id: 10720487
From: Nicolas Boichat
To: Will Deacon
Cc: Robin Murphy, Joerg Roedel, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
    Michal Hocko, Mel Gorman, Levin Alexander, Huaisheng Ye,
    Mike Rapoport, linux-arm-kernel@lists.infradead.org,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Yong Wu, Matthias Brugger, Tomasz Figa,
    yingjoe.chen@mediatek.com, hch@infradead.org, Matthew Wilcox,
    hsinyi@chromium.org, stable@vger.kernel.org
Subject: [PATCH v6 1/3] mm: Add support for kmem caches in DMA32 zone
Date: Mon, 10 Dec 2018 09:15:02 +0800
Message-Id: <20181210011504.122604-2-drinkcat@chromium.org>
In-Reply-To: <20181210011504.122604-1-drinkcat@chromium.org>
References: <20181210011504.122604-1-drinkcat@chromium.org>

IOMMUs using the ARMv7 short-descriptor format require page tables
to be allocated within the first 4GB of RAM, even on 64-bit systems.
On arm64, this is done by passing the GFP_DMA32 flag to memory
allocation functions.

For IOMMU L2 tables that only take 1KB, it would be a waste to
allocate a full page using get_free_pages, so we considered 3
approaches:
 1. This patch, adding support for GFP_DMA32 slab caches.
 2. genalloc, which requires pre-allocating the maximum number of L2
    page tables (4096, so 4MB of memory).
 3. page_frag, which is not very memory-efficient as it is unable to
    reuse freed fragments until the whole page is freed.
This change makes it possible to create a custom cache in the DMA32
zone using kmem_cache_create, then allocate memory using
kmem_cache_alloc.

We do not create a DMA32 kmalloc cache array, as there are currently
no users of kmalloc(..., GFP_DMA32). These calls will continue to
trigger a warning, as we keep GFP_DMA32 in GFP_SLAB_BUG_MASK.

This implies that calls to kmem_cache_*alloc on a SLAB_CACHE_DMA32
kmem_cache must _not_ use GFP_DMA32 (it is anyway redundant and
unnecessary).

Cc: stable@vger.kernel.org
Signed-off-by: Nicolas Boichat
Acked-by: Vlastimil Babka
---

Changes since v2:
 - Clarified commit message
 - Add entry in sysfs-kernel-slab to document the new sysfs file

(v3 used the page_frag approach)

Changes since v4:
 - Added details to commit message
 - Dropped change that removed GFP_DMA32 from GFP_SLAB_BUG_MASK:
   instead we can just call kmem_cache_*alloc without the GFP_DMA32
   parameter. This also means that we can drop PATCH 1/3, as we do
   not make any changes in GFP flag verification.
 - Dropped hunks that added the cache_dma32 sysfs file, and moved the
   hunks to PATCH 3/3, so that the maintainer can decide whether to
   pick the change independently.
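For reference, a driver using this facility would look roughly like
the sketch below. This is illustrative only: the cache name, object
size, alignment, and error handling are hypothetical, not part of
this patch; only the SLAB_CACHE_DMA32 flag and the rule that
allocations must not pass GFP_DMA32 come from the change itself.

```c
/*
 * Hypothetical driver sketch using SLAB_CACHE_DMA32.
 * All names and sizes here are illustrative.
 */
#include <linux/slab.h>

static struct kmem_cache *l2_table_cache;

static int __init example_init(void)
{
	/* Objects from this cache are allocated from ZONE_DMA32. */
	l2_table_cache = kmem_cache_create("example_l2_tables",
					   1024, 1024,
					   SLAB_CACHE_DMA32, NULL);
	if (!l2_table_cache)
		return -ENOMEM;
	return 0;
}

static void example_use(void)
{
	/* Note: plain GFP_KERNEL here. Passing GFP_DMA32 to
	 * kmem_cache_alloc would trigger the GFP_SLAB_BUG_MASK
	 * warning, and is redundant anyway. */
	void *table = kmem_cache_alloc(l2_table_cache, GFP_KERNEL);

	if (table)
		kmem_cache_free(l2_table_cache, table);
}
```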
(no change since v5)

 include/linux/slab.h | 2 ++
 mm/slab.c            | 2 ++
 mm/slab.h            | 3 ++-
 mm/slab_common.c     | 2 +-
 mm/slub.c            | 5 +++++
 5 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae4057c..9449b19c5f107a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -32,6 +32,8 @@
 #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
 /* Use GFP_DMA memory */
 #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
+/* Use GFP_DMA32 memory */
+#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
 /* DEBUG: Store the last owner for bug hunting */
 #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
 /* Panic if kmem_cache_create() fails */
diff --git a/mm/slab.c b/mm/slab.c
index 73fe23e649c91a..124f8c556d27fb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2109,6 +2109,8 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	cachep->allocflags = __GFP_COMP;
 	if (flags & SLAB_CACHE_DMA)
 		cachep->allocflags |= GFP_DMA;
+	if (flags & SLAB_CACHE_DMA32)
+		cachep->allocflags |= GFP_DMA32;
 	if (flags & SLAB_RECLAIM_ACCOUNT)
 		cachep->allocflags |= __GFP_RECLAIMABLE;
 	cachep->size = size;
diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9df..fcf717e12f0a86 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
+#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
 
 #if defined(CONFIG_DEBUG_SLAB)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 70b0cc85db67f8..18b7b809c8d064 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		SLAB_FAILSLAB | SLAB_KASAN)
 
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-			 SLAB_ACCOUNT)
+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
 
 /*
  * Merge control. If this is set then no merging of slab caches will occur.
diff --git a/mm/slub.c b/mm/slub.c
index c229a9b7dd5448..4caadb926838ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3583,6 +3583,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (s->flags & SLAB_CACHE_DMA)
 		s->allocflags |= GFP_DMA;
 
+	if (s->flags & SLAB_CACHE_DMA32)
+		s->allocflags |= GFP_DMA32;
+
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;
 
@@ -5671,6 +5674,8 @@ static char *create_unique_id(struct kmem_cache *s)
 	 */
 	if (s->flags & SLAB_CACHE_DMA)
 		*p++ = 'd';
+	if (s->flags & SLAB_CACHE_DMA32)
+		*p++ = 'D';
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		*p++ = 'a';
 	if (s->flags & SLAB_CONSISTENCY_CHECKS)