From patchwork Fri Apr 5 21:42:51 2013
From: Laura Abbott <lauraa@codeaurora.org>
To: linux-arm-kernel@lists.infradead.org, Russell King, Nicolas Pitre
Cc: linux-arm-msm@vger.kernel.org, Laura Abbott
Subject: [RFC][PATCH] arm: highmem: Add support for flushing kmap_atomic mappings
Date: Fri, 5 Apr 2013 14:42:51 -0700
Message-Id: <1365198171-8469-1-git-send-email-lauraa@codeaurora.org>

The highmem code provides kmap_flush_unused to ensure that all kmap
mappings are really removed once they are no longer in use. This code does
not handle kmap_atomic mappings, since those are managed separately. That
is a problem for any code which relies on a particular page having
absolutely no remaining kernel mappings. Rather than pay the penalty of
having CONFIG_DEBUG_HIGHMEM enabled all the time (which clears each
kmap_atomic slot on kunmap_atomic), add functionality to remove the
kmap_atomic mappings in a way similar to kmap_flush_unused.

This is intended as an RFC to make sure the approach is reasonable; the
goal is for kmap_atomic_flush_unused to become a public API.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm/mm/highmem.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 51 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index 21b9e1b..f4c0466 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -10,6 +10,7 @@
  * published by the Free Software Foundation.
  */
 
+#include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/highmem.h>
 #include <linux/interrupt.h>
@@ -135,3 +136,53 @@ struct page *kmap_atomic_to_page(const void *ptr)
 
         return pte_page(get_top_pte(vaddr));
 }
+
+static void kmap_remove_unused_cpu(int cpu)
+{
+        int start_idx, idx, type;
+
+        pagefault_disable();
+        type = kmap_atomic_idx();
+        start_idx = type + 1 + KM_TYPE_NR * cpu;
+
+        for (idx = start_idx; idx < KM_TYPE_NR + KM_TYPE_NR * cpu; idx++) {
+                unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+                set_top_pte(vaddr, __pte(0));
+        }
+        pagefault_enable();
+}
+
+static void kmap_remove_unused(void *unused)
+{
+        kmap_remove_unused_cpu(smp_processor_id());
+}
+
+void kmap_atomic_flush_unused(void)
+{
+        on_each_cpu(kmap_remove_unused, NULL, 1);
+}
+
+static int hotplug_kmap_atomic_callback(struct notifier_block *nfb,
+                                unsigned long action, void *hcpu)
+{
+        switch (action & (~CPU_TASKS_FROZEN)) {
+        case CPU_DYING:
+                kmap_remove_unused_cpu((int)hcpu);
+                break;
+        default:
+                break;
+        }
+
+        return NOTIFY_OK;
+}
+
+static struct notifier_block hotplug_kmap_atomic_notifier = {
+        .notifier_call = hotplug_kmap_atomic_callback,
+};
+
+static int __init init_kmap_atomic(void)
+{
+
+        return register_hotcpu_notifier(&hotplug_kmap_atomic_notifier);
+}
+early_initcall(init_kmap_atomic);
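
For context, a minimal caller sketch (not part of the patch) showing how the
proposed API is intended to be used alongside the existing kmap_flush_unused().
The function my_driver_release_page() and its page argument are hypothetical,
and the extern declaration only stands in for wherever the final public
prototype would live, since this RFC does not yet add one to a header:

#include <linux/highmem.h>

/* Proposed by this RFC; a real user would get this from a header. */
extern void kmap_atomic_flush_unused(void);

/*
 * Hypothetical example: before handing a highmem page to an entity that
 * requires the kernel to hold no mappings for it, drop both the cached
 * kmap() entries and any leftover kmap_atomic() fixmap slots.
 */
static void my_driver_release_page(struct page *page)
{
        kmap_flush_unused();            /* existing API: unused kmap() slots */
        kmap_atomic_flush_unused();     /* this RFC: stale kmap_atomic slots */

        /* ... page can now be handed over or have its attributes changed ... */
}

If the approach looks reasonable, making kmap_atomic_flush_unused() public
would presumably also mean declaring it next to kmap_flush_unused() (e.g. in
linux/highmem.h) and providing a no-op for !CONFIG_HIGHMEM builds, as is done
for kmap_flush_unused().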