From patchwork Tue Nov 21 22:01:24 2023
X-Patchwork-Submitter: Ilya Leoshkevich
X-Patchwork-Id: 13463672
From: Ilya Leoshkevich
To: Alexander Gordeev, Alexander Potapenko, Andrew Morton,
    Christoph Lameter, David Rientjes, Heiko Carstens, Joonsoo Kim,
    Marco Elver, Masami Hiramatsu, Pekka Enberg, Steven Rostedt,
    Vasily Gorbik, Vlastimil Babka
Cc: Christian Borntraeger, Dmitry Vyukov, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-s390@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, Mark Rutland, Roman Gushchin,
    Sven Schnelle, Ilya Leoshkevich
Subject: [PATCH v2 30/33] s390/uaccess: Add KMSAN support to put_user() and get_user()
Date: Tue, 21 Nov 2023 23:01:24 +0100
Message-ID: <20231121220155.1217090-31-iii@linux.ibm.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231121220155.1217090-1-iii@linux.ibm.com>
References: <20231121220155.1217090-1-iii@linux.ibm.com>

put_user() uses inline assembly with precise constraints, so Clang is
in principle capable of instrumenting it automatically. Unfortunately,
one of the constraints contains a dereferenced user pointer, and Clang
does not currently distinguish user and kernel pointers. Therefore
KMSAN attempts to access shadow for user pointers, which is not the
right thing to do.

The obvious fix of adding __no_sanitize_memory to __put_user_fn() does
not work, since it is __always_inline, and __always_inline cannot be
removed due to the __put_user_bad() trick.

Another obvious fix, using the "a" constraint instead of "+Q", degrades
the generated code, which matters here because this is a hot path.

Instead, repurpose the __put_user_asm() macro to define
__put_user_{char,short,int,long}_noinstr() functions and mark them with
__no_sanitize_memory. For non-KMSAN builds make them __always_inline in
order to keep the generated code quality. Also define
__put_user_{char,short,int,long}() functions, which call the
aforementioned ones and which *are* instrumented, because they call
KMSAN hooks, which may be implemented as macros.

The same applies to get_user() as well.
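For readers less familiar with the pattern, here is a minimal, userspace-only
sketch of the same idea: an uninstrumented inner helper standing in for the
inline-assembly store, plus an always-instrumented wrapper that calls the
hook. The names DEMO_KMSAN, DEMO_NOINSTR, demo_put_*() and
demo_instrument_put() are illustrative stand-ins, not kernel identifiers; the
authoritative implementation is the diff below.

/*
 * Minimal sketch of the noinstr-helper + instrumented-wrapper pattern.
 * Build with any C compiler, or with
 *   clang -DDEMO_KMSAN -fsanitize=memory demo.c
 * to keep the helper out of MSan instrumentation (the
 * no_sanitize("memory") attribute is Clang-specific).
 */
#include <stdio.h>

/* Stub standing in for the instrument_put_user() hook. */
static void demo_instrument_put(const void *from, void *to, unsigned long size)
{
	(void)from;
	(void)to;
	printf("instrumented store of %lu byte(s)\n", size);
}

#ifdef DEMO_KMSAN
/* Instrumented build: keep the inner helper out of shadow tracking. */
#define DEMO_NOINSTR static inline __attribute__((no_sanitize("memory")))
#else
/* Regular build: force inlining so no extra call overhead is added. */
#define DEMO_NOINSTR static inline __attribute__((always_inline))
#endif

#define DEFINE_DEMO_PUT(type)						\
DEMO_NOINSTR int							\
demo_put_##type##_noinstr(unsigned type *to, unsigned type *from)	\
{									\
	*to = *from;	/* stands in for the mvcos-based store */	\
	return 0;							\
}									\
									\
static inline int							\
demo_put_##type(unsigned type *to, unsigned type *from)		\
{									\
	int rc = demo_put_##type##_noinstr(to, from);			\
									\
	/* The hook runs in instrumented code, not in the helper. */	\
	demo_instrument_put(from, to, sizeof(*from));			\
	return rc;							\
}

DEFINE_DEMO_PUT(char)
DEFINE_DEMO_PUT(long)

int main(void)
{
	unsigned long src = 42, dst = 0;

	demo_put_long(&dst, &src);
	printf("dst = %lu\n", dst);
	return 0;
}

With -DDEMO_KMSAN the helper body stays uninstrumented while the wrapper, and
therefore the hook call, remains instrumented; a regular build inlines
everything, mirroring how the patch keeps non-KMSAN code generation unchanged.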
Acked-by: Heiko Carstens
Signed-off-by: Ilya Leoshkevich
---
 arch/s390/include/asm/uaccess.h | 110 ++++++++++++++++++++++----------
 1 file changed, 78 insertions(+), 32 deletions(-)

diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index 81ae8a98e7ec..b0715b88b55a 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -78,13 +78,23 @@ union oac {
 
 int __noreturn __put_user_bad(void);
 
-#define __put_user_asm(to, from, size)					\
-({									\
+#ifdef CONFIG_KMSAN
+#define GET_PUT_USER_NOINSTR_ATTRIBUTES inline __no_sanitize_memory
+#else
+#define GET_PUT_USER_NOINSTR_ATTRIBUTES __always_inline
+#endif
+
+#define DEFINE_PUT_USER(type)						\
+static GET_PUT_USER_NOINSTR_ATTRIBUTES int				\
+__put_user_##type##_noinstr(unsigned type __user *to,			\
+			    unsigned type *from,			\
+			    unsigned long size)				\
+{									\
 	union oac __oac_spec = {					\
 		.oac1.as = PSW_BITS_AS_SECONDARY,			\
 		.oac1.a = 1,						\
 	};								\
-	int __rc;							\
+	int rc;								\
 									\
 	asm volatile(							\
 		"	lr	0,%[spec]\n"				\
@@ -93,12 +103,28 @@ int __noreturn __put_user_bad(void);
 		"2:\n"							\
 		EX_TABLE_UA_STORE(0b, 2b, %[rc])			\
 		EX_TABLE_UA_STORE(1b, 2b, %[rc])			\
-		: [rc] "=&d" (__rc), [_to] "+Q" (*(to))			\
+		: [rc] "=&d" (rc), [_to] "+Q" (*(to))			\
 		: [_size] "d" (size), [_from] "Q" (*(from)),		\
 		  [spec] "d" (__oac_spec.val)				\
 		: "cc", "0");						\
-	__rc;								\
-})
+	return rc;							\
+}									\
+									\
+static __always_inline int						\
+__put_user_##type(unsigned type __user *to, unsigned type *from,	\
+		  unsigned long size)					\
+{									\
+	int rc;								\
+									\
+	rc = __put_user_##type##_noinstr(to, from, size);		\
+	instrument_put_user(*from, to, size);				\
+	return rc;							\
+}
+
+DEFINE_PUT_USER(char);
+DEFINE_PUT_USER(short);
+DEFINE_PUT_USER(int);
+DEFINE_PUT_USER(long);
 
 static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
 {
@@ -106,24 +132,24 @@ static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned lon
 
 	switch (size) {
 	case 1:
-		rc = __put_user_asm((unsigned char __user *)ptr,
-				    (unsigned char *)x,
-				    size);
+		rc = __put_user_char((unsigned char __user *)ptr,
+				     (unsigned char *)x,
+				     size);
 		break;
 	case 2:
-		rc = __put_user_asm((unsigned short __user *)ptr,
-				    (unsigned short *)x,
-				    size);
+		rc = __put_user_short((unsigned short __user *)ptr,
+				      (unsigned short *)x,
+				      size);
 		break;
 	case 4:
-		rc = __put_user_asm((unsigned int __user *)ptr,
+		rc = __put_user_int((unsigned int __user *)ptr,
 				    (unsigned int *)x,
 				    size);
 		break;
 	case 8:
-		rc = __put_user_asm((unsigned long __user *)ptr,
-				    (unsigned long *)x,
-				    size);
+		rc = __put_user_long((unsigned long __user *)ptr,
+				     (unsigned long *)x,
+				     size);
 		break;
 	default:
 		__put_user_bad();
@@ -134,13 +160,17 @@ static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned lon
 
 int __noreturn __get_user_bad(void);
 
-#define __get_user_asm(to, from, size)					\
-({									\
+#define DEFINE_GET_USER(type)						\
+static GET_PUT_USER_NOINSTR_ATTRIBUTES int				\
+__get_user_##type##_noinstr(unsigned type *to,				\
+			    unsigned type __user *from,			\
+			    unsigned long size)				\
+{									\
 	union oac __oac_spec = {					\
 		.oac2.as = PSW_BITS_AS_SECONDARY,			\
 		.oac2.a = 1,						\
 	};								\
-	int __rc;							\
+	int rc;								\
 									\
 	asm volatile(							\
 		"	lr	0,%[spec]\n"				\
@@ -149,13 +179,29 @@ int __noreturn __get_user_bad(void);
 		"2:\n"							\
 		EX_TABLE_UA_LOAD_MEM(0b, 2b, %[rc], %[_to], %[_ksize])	\
 		EX_TABLE_UA_LOAD_MEM(1b, 2b, %[rc], %[_to], %[_ksize])	\
-		: [rc] "=&d" (__rc), "=Q" (*(to))			\
+		: [rc] "=&d" (rc), "=Q" (*(to))				\
 		: [_size] "d" (size), [_from] "Q" (*(from)),		\
 		  [spec] "d" (__oac_spec.val), [_to] "a" (to),		\
 		  [_ksize] "K" (size)					\
 		: "cc", "0");						\
-	__rc;								\
-})
+	return rc;							\
+}									\
+									\
+static __always_inline int						\
+__get_user_##type(unsigned type *to, unsigned type __user *from,	\
+		  unsigned long size)					\
+{									\
+	int rc;								\
+									\
+	rc = __get_user_##type##_noinstr(to, from, size);		\
+	instrument_get_user(*to);					\
+	return rc;							\
+}
+
+DEFINE_GET_USER(char);
+DEFINE_GET_USER(short);
+DEFINE_GET_USER(int);
+DEFINE_GET_USER(long);
 
 static __always_inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size)
 {
@@ -163,24 +209,24 @@ static __always_inline int __get_user_fn(void *x, const void __user *ptr, unsign
 
 	switch (size) {
 	case 1:
-		rc = __get_user_asm((unsigned char *)x,
-				    (unsigned char __user *)ptr,
-				    size);
+		rc = __get_user_char((unsigned char *)x,
+				     (unsigned char __user *)ptr,
+				     size);
 		break;
 	case 2:
-		rc = __get_user_asm((unsigned short *)x,
-				    (unsigned short __user *)ptr,
-				    size);
+		rc = __get_user_short((unsigned short *)x,
+				      (unsigned short __user *)ptr,
+				      size);
 		break;
 	case 4:
-		rc = __get_user_asm((unsigned int *)x,
+		rc = __get_user_int((unsigned int *)x,
 				    (unsigned int __user *)ptr,
 				    size);
 		break;
 	case 8:
-		rc = __get_user_asm((unsigned long *)x,
-				    (unsigned long __user *)ptr,
-				    size);
+		rc = __get_user_long((unsigned long *)x,
+				     (unsigned long __user *)ptr,
+				     size);
 		break;
 	default:
 		__get_user_bad();