From patchwork Tue Sep 15 21:16:08 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11777767
Date: Tue, 15 Sep 2020 23:16:08 +0200
X-Mailer: git-send-email 2.28.0.618.gf4bc123cb7-goog
Subject: [PATCH v2 26/37] arm64: mte: Convert gcr_user into an exclude mask
From: Andrey Konovalov
To: Dmitry Vyukov, Vincenzo Frascino, Catalin Marinas,
 kasan-dev@googlegroups.com
Cc: Andrey Ryabinin, Alexander Potapenko, Marco Elver, Evgenii Stepanov,
 Elena Petrova, Branislav Rankov, Kevin Brodsky, Will Deacon,
 Andrew Morton, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov

From: Vincenzo Frascino

The gcr_user mask is a per-thread mask that represents the tags that
are excluded from random generation when the Memory Tagging Extension
is present and an 'irg' instruction is invoked. gcr_user affects the
behavior on EL0 only.

Currently that mask is an include mask controlled by the user via
prctl(), while GCR_EL1 accepts an exclude mask. Convert the include
mask into an exclude one to make setting the register easier.

Note: this change will affect gcr_kernel (for EL1), introduced by a
future patch.
Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
Reviewed-by: Catalin Marinas
---
Change-Id: Id15c0b47582fb51594bb26fb8353d78c7d0953c1
---
 arch/arm64/include/asm/processor.h |  2 +-
 arch/arm64/kernel/mte.c            | 29 +++++++++++++++--------------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fec204d28fce..ed9efa5be8eb 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -153,7 +153,7 @@ struct thread_struct {
 #endif
 #ifdef CONFIG_ARM64_MTE
 	u64			sctlr_tcf0;
-	u64			gcr_user_incl;
+	u64			gcr_user_excl;
 #endif
 };
 
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index e238ffde2679..858e75cfcaa0 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -140,23 +140,22 @@ static void set_sctlr_el1_tcf0(u64 tcf0)
 	preempt_enable();
 }
 
-static void update_gcr_el1_excl(u64 incl)
+static void update_gcr_el1_excl(u64 excl)
 {
-	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
 
 	/*
-	 * Note that 'incl' is an include mask (controlled by the user via
-	 * prctl()) while GCR_EL1 accepts an exclude mask.
+	 * Note that the mask controlled by the user via prctl() is an
+	 * include while GCR_EL1 accepts an exclude mask.
 	 * No need for ISB since this only affects EL0 currently, implicit
 	 * with ERET.
 	 */
 	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
 }
 
-static void set_gcr_el1_excl(u64 incl)
+static void set_gcr_el1_excl(u64 excl)
 {
-	current->thread.gcr_user_incl = incl;
-	update_gcr_el1_excl(incl);
+	current->thread.gcr_user_excl = excl;
+	update_gcr_el1_excl(excl);
 }
 
 void flush_mte_state(void)
@@ -171,7 +170,7 @@ void flush_mte_state(void)
 	/* disable tag checking */
 	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
 	/* reset tag generation mask */
-	set_gcr_el1_excl(0);
+	set_gcr_el1_excl(SYS_GCR_EL1_EXCL_MASK);
 }
 
 void mte_thread_switch(struct task_struct *next)
@@ -182,7 +181,7 @@ void mte_thread_switch(struct task_struct *next)
 	/* avoid expensive SCTLR_EL1 accesses if no change */
 	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
 		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
-	update_gcr_el1_excl(next->thread.gcr_user_incl);
+	update_gcr_el1_excl(next->thread.gcr_user_excl);
 }
 
 void mte_suspend_exit(void)
@@ -190,13 +189,14 @@ void mte_suspend_exit(void)
 	if (!system_supports_mte())
 		return;
 
-	update_gcr_el1_excl(current->thread.gcr_user_incl);
+	update_gcr_el1_excl(current->thread.gcr_user_excl);
 }
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 {
 	u64 tcf0;
-	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
+	u64 gcr_excl = ~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
+		       SYS_GCR_EL1_EXCL_MASK;
 
 	if (!system_supports_mte())
 		return 0;
@@ -217,10 +217,10 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 
 	if (task != current) {
 		task->thread.sctlr_tcf0 = tcf0;
-		task->thread.gcr_user_incl = gcr_incl;
+		task->thread.gcr_user_excl = gcr_excl;
 	} else {
 		set_sctlr_el1_tcf0(tcf0);
-		set_gcr_el1_excl(gcr_incl);
+		set_gcr_el1_excl(gcr_excl);
 	}
 
 	return 0;
@@ -229,11 +229,12 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 long get_mte_ctrl(struct task_struct *task)
 {
 	unsigned long ret;
+	u64 incl = ~task->thread.gcr_user_excl & SYS_GCR_EL1_EXCL_MASK;
 
 	if (!system_supports_mte())
 		return 0;
 
-	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
+	ret = incl << PR_MTE_TAG_SHIFT;
 
 	switch (task->thread.sctlr_tcf0) {
 	case SCTLR_EL1_TCF0_NONE:
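
[Editor's note, not part of the patch: a minimal, self-contained C sketch
of the include/exclude conversion that set_mte_ctrl() and get_mte_ctrl()
perform above. The PR_MTE_TAG_* and SYS_GCR_EL1_EXCL_MASK values are
assumptions restated from the arm64 headers (tag shift of 3, 16-bit
exclude mask) purely for illustration.]

#include <stdint.h>
#include <stdio.h>

/* Assumed values, restated from the arm64 headers for illustration. */
#define PR_MTE_TAG_SHIFT	3
#define PR_MTE_TAG_MASK		(0xffffUL << PR_MTE_TAG_SHIFT)
#define SYS_GCR_EL1_EXCL_MASK	0xffffUL

int main(void)
{
	/* The user asks (via prctl()) to include tags 0 and 15. */
	unsigned long arg = ((1UL << 0) | (1UL << 15)) << PR_MTE_TAG_SHIFT;

	/*
	 * set_mte_ctrl(): invert the user's include mask into the
	 * exclude mask that GCR_EL1 expects.
	 */
	uint64_t gcr_excl = ~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
			    SYS_GCR_EL1_EXCL_MASK;

	/*
	 * get_mte_ctrl(): invert back, so the user-visible value is
	 * still an include mask.
	 */
	uint64_t incl = ~gcr_excl & SYS_GCR_EL1_EXCL_MASK;

	printf("excl = 0x%04llx, incl = 0x%04llx\n",
	       (unsigned long long)gcr_excl, (unsigned long long)incl);
	/* Prints: excl = 0x7ffe, incl = 0x8001 */
	return 0;
}

The same arithmetic explains the flush_mte_state() hunk: resetting
gcr_user_excl to SYS_GCR_EL1_EXCL_MASK (everything excluded) is the
exclude-mask equivalent of the old include mask of 0, so the default
seen by userspace is unchanged.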