From patchwork Wed Jan 27 09:50:19 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 8131741
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, will.deacon@arm.com,
    catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Ard Biesheuvel, labbott@fedoraproject.org
Subject: [PATCH v2] arm64: allow vmalloc regions to be set with set_memory_*
Date: Wed, 27 Jan 2016 10:50:19 +0100
Message-Id: <1453888219-17695-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.5.0

The range of set_memory_* is currently
restricted to the module address range because of difficulties in breaking
down larger block sizes. vmalloc maps PAGE_SIZE pages, so it is safe to use
as well. Update the function ranges and add a comment explaining why the
range is restricted the way it is.

Suggested-by: Laura Abbott
Acked-by: Mark Rutland
Signed-off-by: Ard Biesheuvel
---
v2: reorder #includes, add Mark's ack

 arch/arm64/mm/pageattr.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index cf6240741134..0795c3a36d8f 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -14,6 +14,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/sched.h>
+#include <linux/vmalloc.h>
 
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -44,6 +45,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long end = start + size;
 	int ret;
 	struct page_change_data data;
+	struct vm_struct *area;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -51,10 +53,23 @@ static int change_memory_common(unsigned long addr, int numpages,
 		WARN_ON_ONCE(1);
 	}
 
-	if (start < MODULES_VADDR || start >= MODULES_END)
-		return -EINVAL;
-
-	if (end < MODULES_VADDR || end >= MODULES_END)
+	/*
+	 * Kernel VA mappings are always live, and splitting live section
+	 * mappings into page mappings may cause TLB conflicts. This means
+	 * we have to ensure that changing the permission bits of the range
+	 * we are operating on does not result in such splitting.
+	 *
+	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
+	 * Those are guaranteed to consist entirely of page mappings, and
+	 * splitting is never needed.
+	 *
+	 * So check whether the [addr, addr + size) interval is entirely
+	 * covered by precisely one VM area that has the VM_ALLOC flag set.
+	 */
+	area = find_vm_area((void *)addr);
+	if (!area ||
+	    end > (unsigned long)area->addr + area->size ||
+	    !(area->flags & VM_ALLOC))
 		return -EINVAL;
 
 	if (!numpages)