From patchwork Fri Jun 14 03:07:55 2019
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 10994149
From: Viresh Kumar
To: linux-arm-kernel@lists.infradead.org, Julien Thierry
Cc: Mark Rutland, Marc Zyngier, Viresh Kumar, Will Deacon,
    stable@vger.kernel.org, mark.brown@arm.com, Catalin Marinas,
    Russell King
Subject: [PATCH v4.4 12/45] arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user
Date: Fri, 14 Jun 2019 08:37:55 +0530
Message-Id: <9f68161e012c5942720575377cffd4e445446acf.1560480942.git.viresh.kumar@linaro.org>

From: Will Deacon

commit f71c2ffcb20dd8626880747557014bb9a61eb90e upstream.

Like we've done for get_user and put_user, ensure that user pointers
are masked before invoking the underlying __arch_{clear,copy_*}_user
operations.
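The masking is performed by the arm64 __uaccess_mask_ptr() helper, which
is defined elsewhere in uaccess.h and does not appear in this diff. As a
rough illustrative sketch in plain C (an approximation only: the real
helper is a branch-free bics/csel assembly sequence followed by a
speculation barrier, and mask_user_ptr_model/addr_limit below are
hypothetical names used just for this example), the clamping it applies
looks like:

	#include <stddef.h>

	/*
	 * Simplified model of user-pointer masking: a pointer above the
	 * current user address limit is forced to NULL, so even if an
	 * access_ok() branch is mispredicted, the low-level copy/clear
	 * routines are never handed a kernel address to dereference
	 * speculatively.
	 */
	static inline void *mask_user_ptr_model(void *ptr, unsigned long addr_limit)
	{
		return ((unsigned long)ptr > addr_limit) ? NULL : ptr;
	}

Each call site touched by this patch applies that clamping inline, via
__uaccess_mask_ptr(), before the pointer reaches __arch_copy_*_user() or
__arch_clear_user().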
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
[ v4.4: fixup for v4.4 style uaccess primitives ]
Signed-off-by: Viresh Kumar
---
 arch/arm64/include/asm/uaccess.h | 20 ++++++++++++--------
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/clear_user.S      |  6 +++---
 arch/arm64/lib/copy_in_user.S    |  4 ++--
 4 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 693a0d784534..a25b8726ffa9 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -303,19 +303,20 @@ do { \
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
-extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+extern unsigned long __must_check __arch_copy_in_user(void __user *to, const void __user *from, unsigned long n);

 static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	kasan_check_write(to, n);
-	return __arch_copy_from_user(to, from, n);
+	return __arch_copy_from_user(to, __uaccess_mask_ptr(from), n);
+}

 static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	kasan_check_read(from, n);
-	return __arch_copy_to_user(to, from, n);
+	return __arch_copy_to_user(__uaccess_mask_ptr(to), from, n);
+}

 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
@@ -338,22 +339,25 @@ static inline unsigned long __must_check copy_to_user(void __user *to, const voi
 	return n;
 }

-static inline unsigned long __must_check copy_in_user(void __user *to, const void __user *from, unsigned long n)
+static inline unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n) && access_ok(VERIFY_WRITE, to, n))
-		n = __copy_in_user(to, from, n);
+		n = __arch_copy_in_user(__uaccess_mask_ptr(to), __uaccess_mask_ptr(from), n);
 	return n;
 }
+#define copy_in_user __copy_in_user

 #define __copy_to_user_inatomic __copy_to_user
 #define __copy_from_user_inatomic __copy_from_user

-static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
+extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
+static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
 {
 	if (access_ok(VERIFY_WRITE, to, n))
-		n = __clear_user(__uaccess_mask_ptr(to), n);
+		n = __arch_clear_user(__uaccess_mask_ptr(to), n);
 	return n;
 }
+#define clear_user __clear_user

 extern long strncpy_from_user(char *dest, const char __user *src, long count);

diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index c654df05b7d7..abe4e0984dbb 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -35,8 +35,8 @@ EXPORT_SYMBOL(clear_page);
 	/* user mem (segment) */
 EXPORT_SYMBOL(__arch_copy_from_user);
 EXPORT_SYMBOL(__arch_copy_to_user);
-EXPORT_SYMBOL(__clear_user);
-EXPORT_SYMBOL(__copy_in_user);
+EXPORT_SYMBOL(__arch_clear_user);
+EXPORT_SYMBOL(__arch_copy_in_user);

 	/* physical memory */
 EXPORT_SYMBOL(memstart_addr);

diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index a9723c71c52b..fc6bb0f83511 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -24,7 +24,7 @@

 	.text

-/* Prototype: int __clear_user(void *addr, size_t sz)
+/* Prototype: int __arch_clear_user(void *addr, size_t sz)
  * Purpose  : clear some user memory
  * Params   : addr - user memory address to clear
  *          : sz - number of bytes to clear
@@ -32,7 +32,7 @@
  *
  * Alignment fixed up by hardware.
  */
-ENTRY(__clear_user)
+ENTRY(__arch_clear_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
 	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
@@ -57,7 +57,7 @@ USER(9f, strb	wzr, [x0]	)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
 	    CONFIG_ARM64_PAN)
 	ret
-ENDPROC(__clear_user)
+ENDPROC(__arch_clear_user)

 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 81c8fc93c100..0219aa85b3cc 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -67,7 +67,7 @@
 	.endm

 end	.req	x5
-ENTRY(__copy_in_user)
+ENTRY(__arch_copy_in_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -76,7 +76,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_in_user)
+ENDPROC(__arch_copy_in_user)

 	.section .fixup,"ax"
 	.align	2