From patchwork Wed Feb 17 08:19:47 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091185
Subject: [PATCH v2 1/8] x86: split __{get,put}_user() into "guest" and "unsafe" variants
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper , Wei Liu ,
=?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap , Ian Jackson References: Message-ID: <75f52f7b-0f94-5c7c-fbd8-f2c85a8a7044@suse.com> Date: Wed, 17 Feb 2021 09:19:47 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US The "guest" variants are intended to work with (potentially) fully guest controlled addresses, while the "unsafe" variants are intended to be used in order to access addresses not (directly) under guest control, within Xen's part of virtual address space. (For linear page table and descriptor table accesses the low bits of the addresses may still be guest controlled, but this still won't allow speculation to "escape" into unwanted areas.) Subsequently we will want them to have distinct behavior, so as first step identify which one is which. For now, both groups of constructs alias one another. Double underscore prefixes are retained only on __{get,put}_guest(), to allow still distinguishing them from their "checking" counterparts once they also get renamed (to {get,put}_guest()). Since for them it's almost a full re-write, move what becomes {get,put}_unsafe_size() into the "common" uaccess.h (x86_64/*.h should disappear at some point anyway). In __copy_to_user() one of the two casts in each put_guest_size() invocation gets dropped. They're not needed and did break symmetry with __copy_from_user(). Signed-off-by: Jan Beulich Reviewed-by: Tim Deegan [shadow] Reviewed-by: Roger Pau Monné --- v2: Use __get_guest() in {,compat_}show_guest_stack(). --- a/xen/arch/x86/mm/shadow/multi.c +++ b/xen/arch/x86/mm/shadow/multi.c @@ -776,9 +776,9 @@ shadow_write_entries(void *d, void *s, i /* Because we mirror access rights at all levels in the shadow, an * l2 (or higher) entry with the RW bit cleared will leave us with * no write access through the linear map. - * We detect that by writing to the shadow with __put_user() and + * We detect that by writing to the shadow with put_unsafe() and * using map_domain_page() to get a writeable mapping if we need to. */ - if ( __put_user(*dst, dst) ) + if ( put_unsafe(*dst, dst) ) { perfc_incr(shadow_linear_map_failed); map = map_domain_page(mfn); --- a/xen/arch/x86/pv/emul-gate-op.c +++ b/xen/arch/x86/pv/emul-gate-op.c @@ -40,7 +40,7 @@ static int read_gate_descriptor(unsigned ((gate_sel >> 3) + !is_pv_32bit_vcpu(v) >= (gate_sel & 4 ? v->arch.pv.ldt_ents : v->arch.pv.gdt_ents)) || - __get_user(desc, pdesc) ) + get_unsafe(desc, pdesc) ) return 0; *sel = (desc.a >> 16) & 0x0000fffc; @@ -59,7 +59,7 @@ static int read_gate_descriptor(unsigned { if ( (*ar & 0x1f00) != 0x0c00 || /* Limit check done above already. */ - __get_user(desc, pdesc + 1) || + get_unsafe(desc, pdesc + 1) || (desc.b & 0x1f00) ) return 0; @@ -294,7 +294,7 @@ void pv_emulate_gate_op(struct cpu_user_ { \ --stkp; \ esp -= 4; \ - rc = __put_user(item, stkp); \ + rc = __put_guest(item, stkp); \ if ( rc ) \ { \ pv_inject_page_fault(PFEC_write_access, \ @@ -362,7 +362,7 @@ void pv_emulate_gate_op(struct cpu_user_ unsigned int parm; --ustkp; - rc = __get_user(parm, ustkp); + rc = __get_guest(parm, ustkp); if ( rc ) { pv_inject_page_fault(0, (unsigned long)(ustkp + 1) - rc); --- a/xen/arch/x86/pv/emulate.c +++ b/xen/arch/x86/pv/emulate.c @@ -34,13 +34,13 @@ int pv_emul_read_descriptor(unsigned int if ( sel < 4 || /* * Don't apply the GDT limit here, as the selector may be a Xen - * provided one. __get_user() will fail (without taking further + * provided one. 
get_unsafe() will fail (without taking further * action) for ones falling in the gap between guest populated * and Xen ones. */ ((sel & 4) && (sel >> 3) >= v->arch.pv.ldt_ents) ) desc.b = desc.a = 0; - else if ( __get_user(desc, gdt_ldt_desc_ptr(sel)) ) + else if ( get_unsafe(desc, gdt_ldt_desc_ptr(sel)) ) return 0; if ( !insn_fetch ) desc.b &= ~_SEGMENT_L; --- a/xen/arch/x86/pv/iret.c +++ b/xen/arch/x86/pv/iret.c @@ -114,15 +114,15 @@ unsigned int compat_iret(void) regs->rsp = (u32)regs->rsp; /* Restore EAX (clobbered by hypercall). */ - if ( unlikely(__get_user(regs->eax, (u32 *)regs->rsp)) ) + if ( unlikely(__get_guest(regs->eax, (u32 *)regs->rsp)) ) { domain_crash(v->domain); return 0; } /* Restore CS and EIP. */ - if ( unlikely(__get_user(regs->eip, (u32 *)regs->rsp + 1)) || - unlikely(__get_user(regs->cs, (u32 *)regs->rsp + 2)) ) + if ( unlikely(__get_guest(regs->eip, (u32 *)regs->rsp + 1)) || + unlikely(__get_guest(regs->cs, (u32 *)regs->rsp + 2)) ) { domain_crash(v->domain); return 0; @@ -132,7 +132,7 @@ unsigned int compat_iret(void) * Fix up and restore EFLAGS. We fix up in a local staging area * to avoid firing the BUG_ON(IOPL) check in arch_get_info_guest. */ - if ( unlikely(__get_user(eflags, (u32 *)regs->rsp + 3)) ) + if ( unlikely(__get_guest(eflags, (u32 *)regs->rsp + 3)) ) { domain_crash(v->domain); return 0; @@ -164,16 +164,16 @@ unsigned int compat_iret(void) { for (i = 1; i < 10; ++i) { - rc |= __get_user(x, (u32 *)regs->rsp + i); - rc |= __put_user(x, (u32 *)(unsigned long)ksp + i); + rc |= __get_guest(x, (u32 *)regs->rsp + i); + rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i); } } else if ( ksp > regs->esp ) { for ( i = 9; i > 0; --i ) { - rc |= __get_user(x, (u32 *)regs->rsp + i); - rc |= __put_user(x, (u32 *)(unsigned long)ksp + i); + rc |= __get_guest(x, (u32 *)regs->rsp + i); + rc |= __put_guest(x, (u32 *)(unsigned long)ksp + i); } } if ( rc ) @@ -189,7 +189,7 @@ unsigned int compat_iret(void) eflags &= ~X86_EFLAGS_IF; regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF| X86_EFLAGS_NT|X86_EFLAGS_TF); - if ( unlikely(__put_user(0, (u32 *)regs->rsp)) ) + if ( unlikely(__put_guest(0, (u32 *)regs->rsp)) ) { domain_crash(v->domain); return 0; @@ -205,8 +205,8 @@ unsigned int compat_iret(void) else if ( ring_1(regs) ) regs->esp += 16; /* Return to ring 2/3: restore ESP and SS. */ - else if ( __get_user(regs->ss, (u32 *)regs->rsp + 5) || - __get_user(regs->esp, (u32 *)regs->rsp + 4) ) + else if ( __get_guest(regs->ss, (u32 *)regs->rsp + 5) || + __get_guest(regs->esp, (u32 *)regs->rsp + 4) ) { domain_crash(v->domain); return 0; --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -274,7 +274,7 @@ static void compat_show_guest_stack(stru { if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask ) break; - if ( __get_user(addr, stack) ) + if ( __get_guest(addr, stack) ) { if ( i != 0 ) printk("\n "); @@ -343,7 +343,7 @@ static void show_guest_stack(struct vcpu { if ( (((long)stack - 1) ^ ((long)(stack + 1) - 1)) & mask ) break; - if ( __get_user(addr, stack) ) + if ( __get_guest(addr, stack) ) { if ( i != 0 ) printk("\n "); --- a/xen/include/asm-x86/uaccess.h +++ b/xen/include/asm-x86/uaccess.h @@ -59,13 +59,11 @@ extern void __put_user_bad(void); __put_user_check((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr))) /** - * __get_user: - Get a simple variable from user space, with less checking. + * __get_guest: - Get a simple variable from guest space, with less checking. * @x: Variable to store result. - * @ptr: Source address, in user space. 
+ * @ptr: Source address, in guest space. * - * Context: User context only. This function may sleep. - * - * This macro copies a single simple variable from user space to kernel + * This macro copies a single simple variable from guest space to hypervisor * space. It supports simple types like char and int, but not larger * data types like structures or arrays. * @@ -78,17 +76,15 @@ extern void __put_user_bad(void); * Returns zero on success, or -EFAULT on error. * On error, the variable @x is set to zero. */ -#define __get_user(x,ptr) \ - __get_user_nocheck((x),(ptr),sizeof(*(ptr))) +#define __get_guest(x, ptr) get_guest_nocheck(x, ptr, sizeof(*(ptr))) +#define get_unsafe __get_guest /** - * __put_user: - Write a simple value into user space, with less checking. - * @x: Value to copy to user space. - * @ptr: Destination address, in user space. + * __put_guest: - Write a simple value into guest space, with less checking. + * @x: Value to store in guest space. + * @ptr: Destination address, in guest space. * - * Context: User context only. This function may sleep. - * - * This macro copies a single simple value from kernel space to user + * This macro copies a single simple value from hypervisor space to guest * space. It supports simple types like char and int, but not larger * data types like structures or arrays. * @@ -100,13 +96,14 @@ extern void __put_user_bad(void); * * Returns zero on success, or -EFAULT on error. */ -#define __put_user(x,ptr) \ - __put_user_nocheck((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr))) +#define __put_guest(x, ptr) \ + put_guest_nocheck((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr))) +#define put_unsafe __put_guest -#define __put_user_nocheck(x, ptr, size) \ +#define put_guest_nocheck(x, ptr, size) \ ({ \ int err_; \ - __put_user_size(x, ptr, size, err_, -EFAULT); \ + put_guest_size(x, ptr, size, err_, -EFAULT); \ err_; \ }) @@ -114,14 +111,14 @@ extern void __put_user_bad(void); ({ \ __typeof__(*(ptr)) __user *ptr_ = (ptr); \ __typeof__(size) size_ = (size); \ - access_ok(ptr_, size_) ? __put_user_nocheck(x, ptr_, size_) \ + access_ok(ptr_, size_) ? put_guest_nocheck(x, ptr_, size_) \ : -EFAULT; \ }) -#define __get_user_nocheck(x, ptr, size) \ +#define get_guest_nocheck(x, ptr, size) \ ({ \ int err_; \ - __get_user_size(x, ptr, size, err_, -EFAULT); \ + get_guest_size(x, ptr, size, err_, -EFAULT); \ err_; \ }) @@ -129,7 +126,7 @@ extern void __put_user_bad(void); ({ \ __typeof__(*(ptr)) __user *ptr_ = (ptr); \ __typeof__(size) size_ = (size); \ - access_ok(ptr_, size_) ? __get_user_nocheck(x, ptr_, size_) \ + access_ok(ptr_, size_) ? get_guest_nocheck(x, ptr_, size_) \ : -EFAULT; \ }) @@ -141,7 +138,7 @@ struct __large_struct { unsigned long bu * we do not write to any memory gcc knows about, so there are no * aliasing issues. 
*/ -#define __put_user_asm(x, addr, err, itype, rtype, ltype, errret) \ +#define put_unsafe_asm(x, addr, err, itype, rtype, ltype, errret) \ stac(); \ __asm__ __volatile__( \ "1: mov"itype" %"rtype"1,%2\n" \ @@ -155,7 +152,7 @@ struct __large_struct { unsigned long bu : ltype (x), "m"(__m(addr)), "i"(errret), "0"(err)); \ clac() -#define __get_user_asm(x, addr, err, itype, rtype, ltype, errret) \ +#define get_unsafe_asm(x, addr, err, itype, rtype, ltype, errret) \ stac(); \ __asm__ __volatile__( \ "1: mov"itype" %2,%"rtype"1\n" \ @@ -170,6 +167,34 @@ struct __large_struct { unsigned long bu : "m"(__m(addr)), "i"(errret), "0"(err)); \ clac() +#define put_unsafe_size(x, ptr, size, retval, errret) \ +do { \ + retval = 0; \ + switch ( size ) \ + { \ + case 1: put_unsafe_asm(x, ptr, retval, "b", "b", "iq", errret); break; \ + case 2: put_unsafe_asm(x, ptr, retval, "w", "w", "ir", errret); break; \ + case 4: put_unsafe_asm(x, ptr, retval, "l", "k", "ir", errret); break; \ + case 8: put_unsafe_asm(x, ptr, retval, "q", "", "ir", errret); break; \ + default: __put_user_bad(); \ + } \ +} while ( false ) +#define put_guest_size put_unsafe_size + +#define get_unsafe_size(x, ptr, size, retval, errret) \ +do { \ + retval = 0; \ + switch ( size ) \ + { \ + case 1: get_unsafe_asm(x, ptr, retval, "b", "b", "=q", errret); break; \ + case 2: get_unsafe_asm(x, ptr, retval, "w", "w", "=r", errret); break; \ + case 4: get_unsafe_asm(x, ptr, retval, "l", "k", "=r", errret); break; \ + case 8: get_unsafe_asm(x, ptr, retval, "q", "", "=r", errret); break; \ + default: __get_user_bad(); \ + } \ +} while ( false ) +#define get_guest_size get_unsafe_size + /** * __copy_to_user: - Copy a block of data into user space, with less checking * @to: Destination address, in user space. 
@@ -192,16 +217,16 @@ __copy_to_user(void __user *to, const vo switch (n) { case 1: - __put_user_size(*(const u8 *)from, (u8 __user *)to, 1, ret, 1); + put_guest_size(*(const uint8_t *)from, to, 1, ret, 1); return ret; case 2: - __put_user_size(*(const u16 *)from, (u16 __user *)to, 2, ret, 2); + put_guest_size(*(const uint16_t *)from, to, 2, ret, 2); return ret; case 4: - __put_user_size(*(const u32 *)from, (u32 __user *)to, 4, ret, 4); + put_guest_size(*(const uint32_t *)from, to, 4, ret, 4); return ret; case 8: - __put_user_size(*(const u64 *)from, (u64 __user *)to, 8, ret, 8); + put_guest_size(*(const uint64_t *)from, to, 8, ret, 8); return ret; } } @@ -233,16 +258,16 @@ __copy_from_user(void *to, const void __ switch (n) { case 1: - __get_user_size(*(u8 *)to, from, 1, ret, 1); + get_guest_size(*(uint8_t *)to, from, 1, ret, 1); return ret; case 2: - __get_user_size(*(u16 *)to, from, 2, ret, 2); + get_guest_size(*(uint16_t *)to, from, 2, ret, 2); return ret; case 4: - __get_user_size(*(u32 *)to, from, 4, ret, 4); + get_guest_size(*(uint32_t *)to, from, 4, ret, 4); return ret; case 8: - __get_user_size(*(u64*)to, from, 8, ret, 8); + get_guest_size(*(uint64_t *)to, from, 8, ret, 8); return ret; } } --- a/xen/include/asm-x86/x86_64/uaccess.h +++ b/xen/include/asm-x86/x86_64/uaccess.h @@ -57,28 +57,4 @@ extern void *xlat_malloc(unsigned long * (likely((count) < (~0U / (size))) && \ compat_access_ok(addr, 0 + (count) * (size))) -#define __put_user_size(x,ptr,size,retval,errret) \ -do { \ - retval = 0; \ - switch (size) { \ - case 1: __put_user_asm(x,ptr,retval,"b","b","iq",errret);break; \ - case 2: __put_user_asm(x,ptr,retval,"w","w","ir",errret);break; \ - case 4: __put_user_asm(x,ptr,retval,"l","k","ir",errret);break; \ - case 8: __put_user_asm(x,ptr,retval,"q","","ir",errret);break; \ - default: __put_user_bad(); \ - } \ -} while (0) - -#define __get_user_size(x,ptr,size,retval,errret) \ -do { \ - retval = 0; \ - switch (size) { \ - case 1: __get_user_asm(x,ptr,retval,"b","b","=q",errret);break; \ - case 2: __get_user_asm(x,ptr,retval,"w","w","=r",errret);break; \ - case 4: __get_user_asm(x,ptr,retval,"l","k","=r",errret);break; \ - case 8: __get_user_asm(x,ptr,retval,"q","","=r",errret); break; \ - default: __get_user_bad(); \ - } \ -} while (0) - #endif /* __X86_64_UACCESS_H */ --- a/xen/test/livepatch/xen_hello_world_func.c +++ b/xen/test/livepatch/xen_hello_world_func.c @@ -26,7 +26,7 @@ const char *xen_hello_world(void) * Any BUG, or WARN_ON will contain symbol and payload name. Furthermore * exceptions will be caught and processed properly. 
  */
-    rc = __get_user(tmp, non_canonical_addr);
+    rc = get_unsafe(tmp, non_canonical_addr);
     BUG_ON(rc != -EFAULT);
 #endif
 #if defined(CONFIG_ARM)

From patchwork Wed Feb 17 08:20:07 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091187
Subject: [PATCH v2 2/8] x86: split
__copy_{from,to}_user() into "guest" and "unsafe" variants From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap , Ian Jackson References: Message-ID: <2c7530b5-5e56-bac8-6011-6c3a6aa529fa@suse.com> Date: Wed, 17 Feb 2021 09:20:07 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1 MIME-Version: 1.0 In-Reply-To: Content-Language: en-US The "guest" variants are intended to work with (potentially) fully guest controlled addresses, while the "unsafe" variants are intended to be used in order to access addresses not (directly) under guest control, within Xen's part of virtual address space. Subsequently we will want them to have distinct behavior, so as first step identify which one is which. For now, both groups of constructs alias one another. Double underscore prefixes are retained only on __copy_{from,to}_guest_pv(), to allow still distinguishing them from their "checking" counterparts once they also get renamed (to copy_{from,to}_guest_pv()). Add previously missing __user at some call sites. Signed-off-by: Jan Beulich Reviewed-by: Tim Deegan [shadow] Reviewed-by: Roger Pau Monné --- Instead of __copy_{from,to}_guest_pv(), perhaps name them just __copy_{from,to}_pv()? --- a/xen/arch/x86/gdbstub.c +++ b/xen/arch/x86/gdbstub.c @@ -33,13 +33,13 @@ gdb_arch_signal_num(struct cpu_user_regs unsigned int gdb_arch_copy_from_user(void *dest, const void *src, unsigned len) { - return __copy_from_user(dest, src, len); + return copy_from_unsafe(dest, src, len); } unsigned int gdb_arch_copy_to_user(void *dest, const void *src, unsigned len) { - return __copy_to_user(dest, src, len); + return copy_to_unsafe(dest, src, len); } void --- a/xen/arch/x86/mm/shadow/multi.c +++ b/xen/arch/x86/mm/shadow/multi.c @@ -2614,7 +2614,7 @@ static int sh_page_fault(struct vcpu *v, { shadow_l2e_t sl2e; mfn_t gl1mfn; - if ( (__copy_from_user(&sl2e, + if ( (copy_from_unsafe(&sl2e, (sh_linear_l2_table(v) + shadow_l2_linear_offset(va)), sizeof(sl2e)) != 0) @@ -2633,7 +2633,7 @@ static int sh_page_fault(struct vcpu *v, #endif /* SHOPT_OUT_OF_SYNC */ /* The only reasons for reserved bits to be set in shadow entries * are the two "magic" shadow_l1e entries. */ - if ( likely((__copy_from_user(&sl1e, + if ( likely((copy_from_unsafe(&sl1e, (sh_linear_l1_table(v) + shadow_l1_linear_offset(va)), sizeof(sl1e)) == 0) @@ -3308,10 +3308,10 @@ static bool sh_invlpg(struct vcpu *v, un sh_linear_l4_table(v)[shadow_l4_linear_offset(linear)]) & _PAGE_PRESENT) ) return false; - /* This must still be a copy-from-user because we don't have the + /* This must still be a copy-from-unsafe because we don't have the * paging lock, and the higher-level shadows might disappear * under our feet. */ - if ( __copy_from_user(&sl3e, (sh_linear_l3_table(v) + if ( copy_from_unsafe(&sl3e, (sh_linear_l3_table(v) + shadow_l3_linear_offset(linear)), sizeof (sl3e)) != 0 ) { @@ -3330,9 +3330,9 @@ static bool sh_invlpg(struct vcpu *v, un return false; #endif - /* This must still be a copy-from-user because we don't have the shadow + /* This must still be a copy-from-unsafe because we don't have the shadow * lock, and the higher-level shadows might disappear under our feet. */ - if ( __copy_from_user(&sl2e, + if ( copy_from_unsafe(&sl2e, sh_linear_l2_table(v) + shadow_l2_linear_offset(linear), sizeof (sl2e)) != 0 ) { @@ -3371,11 +3371,11 @@ static bool sh_invlpg(struct vcpu *v, un * hold the paging lock yet. 
Check again with the lock held. */ paging_lock(d); - /* This must still be a copy-from-user because we didn't + /* This must still be a copy-from-unsafe because we didn't * have the paging lock last time we checked, and the * higher-level shadows might have disappeared under our * feet. */ - if ( __copy_from_user(&sl2e, + if ( copy_from_unsafe(&sl2e, sh_linear_l2_table(v) + shadow_l2_linear_offset(linear), sizeof (sl2e)) != 0 ) --- a/xen/arch/x86/pv/emul-gate-op.c +++ b/xen/arch/x86/pv/emul-gate-op.c @@ -149,12 +149,12 @@ static int read_mem(enum x86_segment seg addr = (uint32_t)addr; - if ( (rc = __copy_from_user(p_data, (void *)addr, bytes)) ) + if ( (rc = __copy_from_guest_pv(p_data, (void __user *)addr, bytes)) ) { /* * TODO: This should report PFEC_insn_fetch when goc->insn_fetch && * cpu_has_nx, but we'd then need a "fetch" variant of - * __copy_from_user() respecting NX, SMEP, and protection keys. + * __copy_from_guest_pv() respecting NX, SMEP, and protection keys. */ x86_emul_pagefault(0, addr + bytes - rc, ctxt); return X86EMUL_EXCEPTION; --- a/xen/arch/x86/pv/emul-priv-op.c +++ b/xen/arch/x86/pv/emul-priv-op.c @@ -649,7 +649,8 @@ static int rep_ins(uint16_t port, if ( rc != X86EMUL_OKAY ) return rc; - if ( (rc = __copy_to_user((void *)addr, &data, bytes_per_rep)) != 0 ) + if ( (rc = __copy_to_guest_pv((void __user *)addr, &data, + bytes_per_rep)) != 0 ) { x86_emul_pagefault(PFEC_write_access, addr + bytes_per_rep - rc, ctxt); @@ -716,7 +717,8 @@ static int rep_outs(enum x86_segment seg if ( rc != X86EMUL_OKAY ) return rc; - if ( (rc = __copy_from_user(&data, (void *)addr, bytes_per_rep)) != 0 ) + if ( (rc = __copy_from_guest_pv(&data, (void __user *)addr, + bytes_per_rep)) != 0 ) { x86_emul_pagefault(0, addr + bytes_per_rep - rc, ctxt); return X86EMUL_EXCEPTION; @@ -1253,12 +1255,12 @@ static int insn_fetch(enum x86_segment s if ( rc != X86EMUL_OKAY ) return rc; - if ( (rc = __copy_from_user(p_data, (void *)addr, bytes)) != 0 ) + if ( (rc = __copy_from_guest_pv(p_data, (void __user *)addr, bytes)) != 0 ) { /* * TODO: This should report PFEC_insn_fetch when goc->insn_fetch && * cpu_has_nx, but we'd then need a "fetch" variant of - * __copy_from_user() respecting NX, SMEP, and protection keys. + * __copy_from_guest_pv() respecting NX, SMEP, and protection keys. */ x86_emul_pagefault(0, addr + bytes - rc, ctxt); return X86EMUL_EXCEPTION; --- a/xen/arch/x86/pv/mm.c +++ b/xen/arch/x86/pv/mm.c @@ -41,7 +41,7 @@ l1_pgentry_t *map_guest_l1e(unsigned lon return NULL; /* Find this l1e and its enclosing l1mfn in the linear map. */ - if ( __copy_from_user(&l2e, + if ( copy_from_unsafe(&l2e, &__linear_l2_table[l2_linear_offset(linear)], sizeof(l2_pgentry_t)) ) return NULL; --- a/xen/arch/x86/pv/mm.h +++ b/xen/arch/x86/pv/mm.h @@ -22,7 +22,7 @@ static inline l1_pgentry_t guest_get_eff toggle_guest_pt(curr); if ( unlikely(!__addr_ok(linear)) || - __copy_from_user(&l1e, + copy_from_unsafe(&l1e, &__linear_l1_table[l1_linear_offset(linear)], sizeof(l1_pgentry_t)) ) l1e = l1e_empty(); --- a/xen/arch/x86/pv/ro-page-fault.c +++ b/xen/arch/x86/pv/ro-page-fault.c @@ -43,7 +43,7 @@ static int ptwr_emulated_read(enum x86_s unsigned long addr = offset; if ( !__addr_ok(addr) || - (rc = __copy_from_user(p_data, (void *)addr, bytes)) ) + (rc = __copy_from_guest_pv(p_data, (void *)addr, bytes)) ) { x86_emul_pagefault(0, addr + bytes - rc, ctxt); /* Read fault. 
*/ return X86EMUL_EXCEPTION; --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -1103,7 +1103,7 @@ void do_invalid_op(struct cpu_user_regs } if ( !is_active_kernel_text(regs->rip) || - __copy_from_user(bug_insn, eip, sizeof(bug_insn)) || + copy_from_unsafe(bug_insn, eip, sizeof(bug_insn)) || memcmp(bug_insn, "\xf\xb", sizeof(bug_insn)) ) goto die; --- a/xen/arch/x86/usercopy.c +++ b/xen/arch/x86/usercopy.c @@ -110,7 +110,7 @@ unsigned __copy_from_user_ll(void *to, c unsigned copy_to_user(void __user *to, const void *from, unsigned n) { if ( access_ok(to, n) ) - n = __copy_to_user(to, from, n); + n = __copy_to_guest_pv(to, from, n); return n; } @@ -168,7 +168,7 @@ unsigned clear_user(void __user *to, uns unsigned copy_from_user(void *to, const void __user *from, unsigned n) { if ( access_ok(from, n) ) - n = __copy_from_user(to, from, n); + n = __copy_from_guest_pv(to, from, n); else memset(to, 0, n); return n; --- a/xen/include/asm-x86/guest_access.h +++ b/xen/include/asm-x86/guest_access.h @@ -28,11 +28,11 @@ #define __raw_copy_to_guest(dst, src, len) \ (is_hvm_vcpu(current) ? \ copy_to_user_hvm((dst), (src), (len)) : \ - __copy_to_user((dst), (src), (len))) + __copy_to_guest_pv(dst, src, len)) #define __raw_copy_from_guest(dst, src, len) \ (is_hvm_vcpu(current) ? \ copy_from_user_hvm((dst), (src), (len)) : \ - __copy_from_user((dst), (src), (len))) + __copy_from_guest_pv(dst, src, len)) #define __raw_clear_guest(dst, len) \ (is_hvm_vcpu(current) ? \ clear_user_hvm((dst), (len)) : \ --- a/xen/include/asm-x86/uaccess.h +++ b/xen/include/asm-x86/uaccess.h @@ -196,21 +196,20 @@ do { #define get_guest_size get_unsafe_size /** - * __copy_to_user: - Copy a block of data into user space, with less checking - * @to: Destination address, in user space. - * @from: Source address, in kernel space. + * __copy_to_guest_pv: - Copy a block of data into guest space, with less + * checking + * @to: Destination address, in guest space. + * @from: Source address, in hypervisor space. * @n: Number of bytes to copy. * - * Context: User context only. This function may sleep. - * - * Copy data from kernel space to user space. Caller must check + * Copy data from hypervisor space to guest space. Caller must check * the specified block with access_ok() before calling this function. * * Returns number of bytes that could not be copied. * On success, this will be zero. */ static always_inline unsigned long -__copy_to_user(void __user *to, const void *from, unsigned long n) +__copy_to_guest_pv(void __user *to, const void *from, unsigned long n) { if (__builtin_constant_p(n)) { unsigned long ret; @@ -232,16 +231,16 @@ __copy_to_user(void __user *to, const vo } return __copy_to_user_ll(to, from, n); } +#define copy_to_unsafe __copy_to_guest_pv /** - * __copy_from_user: - Copy a block of data from user space, with less checking - * @to: Destination address, in kernel space. - * @from: Source address, in user space. + * __copy_from_guest_pv: - Copy a block of data from guest space, with less + * checking + * @to: Destination address, in hypervisor space. + * @from: Source address, in guest space. * @n: Number of bytes to copy. * - * Context: User context only. This function may sleep. - * - * Copy data from user space to kernel space. Caller must check + * Copy data from guest space to hypervisor space. Caller must check * the specified block with access_ok() before calling this function. * * Returns number of bytes that could not be copied. 
@@ -251,7 +250,7 @@ __copy_to_user(void __user *to, const vo
  * data to the requested size using zero bytes.
  */
 static always_inline unsigned long
-__copy_from_user(void *to, const void __user *from, unsigned long n)
+__copy_from_guest_pv(void *to, const void __user *from, unsigned long n)
 {
     if (__builtin_constant_p(n)) {
         unsigned long ret;
@@ -273,6 +272,7 @@ __copy_from_user(void *to, const void __
     }
     return __copy_from_user_ll(to, from, n);
 }
+#define copy_from_unsafe __copy_from_guest_pv
 
 /*
  * The exception table consists of pairs of addresses: the first is the
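To make the new naming concrete, here is a minimal illustrative sketch, not part of the series (demo_read_l2e() is an invented name; the accessor and linear-map symbols are the ones used in the pv/mm.c hunk above), of an "unsafe" access, i.e. one to Xen's own virtual address space where at most the low, in-page bits are guest controlled:

    /* Sketch only: an "unsafe" (Xen-virtual-address) read. */
    static l2_pgentry_t demo_read_l2e(unsigned long linear)
    {
        l2_pgentry_t l2e;

        /* copy_from_unsafe() returns the number of bytes not copied. */
        if ( copy_from_unsafe(&l2e,
                              &__linear_l2_table[l2_linear_offset(linear)],
                              sizeof(l2e)) )
            l2e = l2e_empty();

        return l2e;
    }

A fully guest-controlled pointer, by contrast, would have to go through __copy_from_guest_pv() or, with range checking, copy_from_user().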
From patchwork Wed Feb 17 08:20:37 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091189
Subject: [PATCH v2 3/8] x86/PV: harden guest memory accesses against speculative abuse
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Date: Wed, 17 Feb 2021 09:20:37 +0100

Inspired by
https://lore.kernel.org/lkml/f12e7d3cecf41b2c29734ea45a393be21d4a8058.1597848273.git.jpoimboe@redhat.com/
and prior work in that area of x86 Linux, suppress speculation with
guest specified pointer values by suitably masking the addresses to
non-canonical space in case they fall into Xen's virtual address range.

Introduce a new Kconfig control.

Note that it is necessary in such code to avoid using "m" kind operands:
If we didn't, there would be no guarantee that the register passed to
guest_access_mask_ptr is also the (base) one used for the memory access.

As a minor unrelated change in get_unsafe_asm() the unnecessary "itype"
parameter gets dropped and the XOR on the fixup path gets changed to be
a 32-bit one in all cases: This way we avoid pointless REX.W or operand
size overrides, or writes to partial registers.

Requested-by: Andrew Cooper
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v2: Add comment to assembler macro.
---
The insn sequence chosen is certainly up for discussion; I've picked
this one despite the RCR because alternatives I could come up with, like

    mov $(HYPERVISOR_VIRT_END), %rax
    mov $~0, %rdx
    mov $0x7fffffffffffffff, %rcx
    cmp %rax, %rdi
    cmovb %rcx, %rdx
    and %rdx, %rdi

weren't necessarily better: Either, as above, they are longer and
require a 3rd scratch register, or they also utilize the carry flag in
some similar way.
---
Judging from the comment ahead of put_unsafe_asm() we might as well not
tell gcc at all anymore about the memory access there, now that there's
no use of the operand anymore in the assembly code.
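The masking this patch implements in assembly can be restated in C; the following is only an illustration of the invariant quoted in the asm-defns.h comment below (demo_mask_guest_ptr() is a made-up name), not code from the patch:

    /* ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END);
     *
     * For any address below HYPERVISOR_VIRT_END the shift count is 1 and
     * the mask clears bit 63: an address aimed at Xen's range becomes
     * non-canonical (it may fault, but cannot be dereferenced inside
     * Xen's mappings, not even speculatively), while low, user-half
     * addresses have bit 63 clear anyway and pass through unchanged.
     * Addresses at or above HYPERVISOR_VIRT_END keep the all-ones mask. */
    static inline unsigned long demo_mask_guest_ptr(unsigned long ptr)
    {
        return ptr & (~0ull >> (ptr < HYPERVISOR_VIRT_END));
    }

The assembly version below computes the same mask without a conditional branch, by rotating the CMP-produced carry flag into the mask's top bit via RCR.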
--- a/xen/arch/x86/usercopy.c +++ b/xen/arch/x86/usercopy.c @@ -10,12 +10,19 @@ #include #include -unsigned __copy_to_user_ll(void __user *to, const void *from, unsigned n) +#ifndef GUARD +# define GUARD UA_KEEP +#endif + +unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n) { unsigned dummy; stac(); asm volatile ( + GUARD( + " guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n" + ) " cmp $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n" " jbe 1f\n" " mov %k[to], %[cnt]\n" @@ -42,6 +49,7 @@ unsigned __copy_to_user_ll(void __user * _ASM_EXTABLE(1b, 2b) : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from), [aux] "=&r" (dummy) + GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy)) : "[aux]" (n) : "memory" ); clac(); @@ -49,12 +57,15 @@ unsigned __copy_to_user_ll(void __user * return n; } -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n) +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n) { unsigned dummy; stac(); asm volatile ( + GUARD( + " guest_access_mask_ptr %[from], %q[scratch1], %q[scratch2]\n" + ) " cmp $"STR(2*BYTES_PER_LONG-1)", %[cnt]\n" " jbe 1f\n" " mov %k[to], %[cnt]\n" @@ -87,6 +98,7 @@ unsigned __copy_from_user_ll(void *to, c _ASM_EXTABLE(1b, 6b) : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from), [aux] "=&r" (dummy) + GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy)) : "[aux]" (n) : "memory" ); clac(); @@ -94,6 +106,8 @@ unsigned __copy_from_user_ll(void *to, c return n; } +#if GUARD(1) + 0 + /** * copy_to_user: - Copy a block of data into user space. * @to: Destination address, in user space. @@ -128,8 +142,11 @@ unsigned clear_user(void __user *to, uns { if ( access_ok(to, n) ) { + long dummy; + stac(); asm volatile ( + " guest_access_mask_ptr %[to], %[scratch1], %[scratch2]\n" "0: rep stos"__OS"\n" " mov %[bytes], %[cnt]\n" "1: rep stosb\n" @@ -140,7 +157,8 @@ unsigned clear_user(void __user *to, uns ".previous\n" _ASM_EXTABLE(0b,3b) _ASM_EXTABLE(1b,2b) - : [cnt] "=&c" (n), [to] "+D" (to) + : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy), + [scratch2] "=&r" (dummy) : [bytes] "r" (n & (BYTES_PER_LONG - 1)), [longs] "0" (n / BYTES_PER_LONG), "a" (0) ); clac(); @@ -174,6 +192,16 @@ unsigned copy_from_user(void *to, const return n; } +# undef GUARD +# define GUARD UA_DROP +# define copy_to_guest_ll copy_to_unsafe_ll +# define copy_from_guest_ll copy_from_unsafe_ll +# undef __user +# define __user +# include __FILE__ + +#endif /* GUARD(1) */ + /* * Local variables: * mode: C --- a/xen/arch/x86/x86_64/entry.S +++ b/xen/arch/x86/x86_64/entry.S @@ -458,6 +458,8 @@ UNLIKELY_START(g, create_bounce_frame_ba jmp asm_domain_crash_synchronous /* Does not return */ __UNLIKELY_END(create_bounce_frame_bad_sp) + guest_access_mask_ptr %rsi, %rax, %rcx + #define STORE_GUEST_STACK(reg, n) \ 0: movq %reg,(n)*8(%rsi); \ _ASM_EXTABLE(0b, domain_crash_page_fault_ ## n ## x8) --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -111,6 +111,24 @@ config SPECULATIVE_HARDEN_BRANCH If unsure, say Y. +config SPECULATIVE_HARDEN_GUEST_ACCESS + bool "Speculative PV Guest Memory Access Hardening" + default y + depends on PV + help + Contemporary processors may use speculative execution as a + performance optimisation, but this can potentially be abused by an + attacker to leak data via speculative sidechannels. + + One source of data leakage is via speculative accesses to hypervisor + memory through guest controlled values used to access guest memory. 
+ + When enabled, code paths accessing PV guest memory will have guest + controlled addresses massaged such that memory accesses through them + won't touch hypervisor address space. + + If unsure, say Y. + endmenu config HYPFS --- a/xen/include/asm-x86/asm-defns.h +++ b/xen/include/asm-x86/asm-defns.h @@ -56,3 +56,23 @@ .macro INDIRECT_JMP arg:req INDIRECT_BRANCH jmp \arg .endm + +.macro guest_access_mask_ptr ptr:req, scratch1:req, scratch2:req +#if defined(CONFIG_SPECULATIVE_HARDEN_GUEST_ACCESS) + /* + * Here we want + * + * ptr &= ~0ull >> (ptr < HYPERVISOR_VIRT_END); + * + * but guaranteed without any conditional branches (hence in assembly). + */ + mov $(HYPERVISOR_VIRT_END - 1), \scratch1 + mov $~0, \scratch2 + cmp \ptr, \scratch1 + rcr $1, \scratch2 + and \scratch2, \ptr +#elif defined(CONFIG_DEBUG) && defined(CONFIG_PV) + xor $~\@, \scratch1 + xor $~\@, \scratch2 +#endif +.endm --- a/xen/include/asm-x86/uaccess.h +++ b/xen/include/asm-x86/uaccess.h @@ -12,13 +12,19 @@ unsigned copy_to_user(void *to, const void *from, unsigned len); unsigned clear_user(void *to, unsigned len); unsigned copy_from_user(void *to, const void *from, unsigned len); + /* Handles exceptions in both to and from, but doesn't do access_ok */ -unsigned __copy_to_user_ll(void __user*to, const void *from, unsigned n); -unsigned __copy_from_user_ll(void *to, const void __user *from, unsigned n); +unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n); +unsigned int copy_from_guest_ll(void *to, const void __user *from, unsigned int n); +unsigned int copy_to_unsafe_ll(void *to, const void *from, unsigned int n); +unsigned int copy_from_unsafe_ll(void *to, const void *from, unsigned int n); extern long __get_user_bad(void); extern void __put_user_bad(void); +#define UA_KEEP(args...) args +#define UA_DROP(args...) + /** * get_user: - Get a simple variable from user space. * @x: Variable to store result. @@ -77,7 +83,6 @@ extern void __put_user_bad(void); * On error, the variable @x is set to zero. */ #define __get_guest(x, ptr) get_guest_nocheck(x, ptr, sizeof(*(ptr))) -#define get_unsafe __get_guest /** * __put_guest: - Write a simple value into guest space, with less checking. @@ -98,7 +103,13 @@ extern void __put_user_bad(void); */ #define __put_guest(x, ptr) \ put_guest_nocheck((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr))) -#define put_unsafe __put_guest + +#define put_unsafe(x, ptr) \ +({ \ + int err_; \ + put_unsafe_size(x, ptr, sizeof(*(ptr)), UA_DROP, err_, -EFAULT);\ + err_; \ +}) #define put_guest_nocheck(x, ptr, size) \ ({ \ @@ -115,6 +126,13 @@ extern void __put_user_bad(void); : -EFAULT; \ }) +#define get_unsafe(x, ptr) \ +({ \ + int err_; \ + get_unsafe_size(x, ptr, sizeof(*(ptr)), UA_DROP, err_, -EFAULT);\ + err_; \ +}) + #define get_guest_nocheck(x, ptr, size) \ ({ \ int err_; \ @@ -138,62 +156,87 @@ struct __large_struct { unsigned long bu * we do not write to any memory gcc knows about, so there are no * aliasing issues. 
*/ -#define put_unsafe_asm(x, addr, err, itype, rtype, ltype, errret) \ +#define put_unsafe_asm(x, addr, GUARD, err, itype, rtype, ltype, errret) \ stac(); \ __asm__ __volatile__( \ - "1: mov"itype" %"rtype"1,%2\n" \ + GUARD( \ + " guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \ + ) \ + "1: mov"itype" %"rtype"[val], (%[ptr])\n" \ "2:\n" \ ".section .fixup,\"ax\"\n" \ - "3: mov %3,%0\n" \ + "3: mov %[errno], %[ret]\n" \ " jmp 2b\n" \ ".previous\n" \ _ASM_EXTABLE(1b, 3b) \ - : "=r"(err) \ - : ltype (x), "m"(__m(addr)), "i"(errret), "0"(err)); \ + : [ret] "+r" (err), [ptr] "=&r" (dummy_) \ + GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_)) \ + : [val] ltype (x), "m" (__m(addr)), \ + "[ptr]" (addr), [errno] "i" (errret)); \ clac() -#define get_unsafe_asm(x, addr, err, itype, rtype, ltype, errret) \ +#define get_unsafe_asm(x, addr, GUARD, err, rtype, ltype, errret) \ stac(); \ __asm__ __volatile__( \ - "1: mov"itype" %2,%"rtype"1\n" \ + GUARD( \ + " guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n" \ + ) \ + "1: mov (%[ptr]), %"rtype"[val]\n" \ "2:\n" \ ".section .fixup,\"ax\"\n" \ - "3: mov %3,%0\n" \ - " xor"itype" %"rtype"1,%"rtype"1\n" \ + "3: mov %[errno], %[ret]\n" \ + " xor %k[val], %k[val]\n" \ " jmp 2b\n" \ ".previous\n" \ _ASM_EXTABLE(1b, 3b) \ - : "=r"(err), ltype (x) \ - : "m"(__m(addr)), "i"(errret), "0"(err)); \ + : [ret] "+r" (err), [val] ltype (x), \ + [ptr] "=&r" (dummy_) \ + GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_)) \ + : "m" (__m(addr)), "[ptr]" (addr), \ + [errno] "i" (errret)); \ clac() -#define put_unsafe_size(x, ptr, size, retval, errret) \ +#define put_unsafe_size(x, ptr, size, grd, retval, errret) \ do { \ retval = 0; \ switch ( size ) \ { \ - case 1: put_unsafe_asm(x, ptr, retval, "b", "b", "iq", errret); break; \ - case 2: put_unsafe_asm(x, ptr, retval, "w", "w", "ir", errret); break; \ - case 4: put_unsafe_asm(x, ptr, retval, "l", "k", "ir", errret); break; \ - case 8: put_unsafe_asm(x, ptr, retval, "q", "", "ir", errret); break; \ + long dummy_; \ + case 1: \ + put_unsafe_asm(x, ptr, grd, retval, "b", "b", "iq", errret); \ + break; \ + case 2: \ + put_unsafe_asm(x, ptr, grd, retval, "w", "w", "ir", errret); \ + break; \ + case 4: \ + put_unsafe_asm(x, ptr, grd, retval, "l", "k", "ir", errret); \ + break; \ + case 8: \ + put_unsafe_asm(x, ptr, grd, retval, "q", "", "ir", errret); \ + break; \ default: __put_user_bad(); \ } \ } while ( false ) -#define put_guest_size put_unsafe_size -#define get_unsafe_size(x, ptr, size, retval, errret) \ +#define put_guest_size(x, ptr, size, retval, errret) \ + put_unsafe_size(x, ptr, size, UA_KEEP, retval, errret) + +#define get_unsafe_size(x, ptr, size, grd, retval, errret) \ do { \ retval = 0; \ switch ( size ) \ { \ - case 1: get_unsafe_asm(x, ptr, retval, "b", "b", "=q", errret); break; \ - case 2: get_unsafe_asm(x, ptr, retval, "w", "w", "=r", errret); break; \ - case 4: get_unsafe_asm(x, ptr, retval, "l", "k", "=r", errret); break; \ - case 8: get_unsafe_asm(x, ptr, retval, "q", "", "=r", errret); break; \ + long dummy_; \ + case 1: get_unsafe_asm(x, ptr, grd, retval, "b", "=q", errret); break; \ + case 2: get_unsafe_asm(x, ptr, grd, retval, "w", "=r", errret); break; \ + case 4: get_unsafe_asm(x, ptr, grd, retval, "k", "=r", errret); break; \ + case 8: get_unsafe_asm(x, ptr, grd, retval, "", "=r", errret); break; \ default: __get_user_bad(); \ } \ } while ( false ) -#define get_guest_size get_unsafe_size + +#define get_guest_size(x, ptr, size, retval, errret) \ + get_unsafe_size(x, ptr, size, UA_KEEP, 
retval, errret) /** * __copy_to_guest_pv: - Copy a block of data into guest space, with less @@ -229,9 +272,8 @@ __copy_to_guest_pv(void __user *to, cons return ret; } } - return __copy_to_user_ll(to, from, n); + return copy_to_guest_ll(to, from, n); } -#define copy_to_unsafe __copy_to_guest_pv /** * __copy_from_guest_pv: - Copy a block of data from guest space, with less @@ -270,9 +312,87 @@ __copy_from_guest_pv(void *to, const voi return ret; } } - return __copy_from_user_ll(to, from, n); + return copy_from_guest_ll(to, from, n); +} + +/** + * copy_to_unsafe: - Copy a block of data to unsafe space, with exception + * checking + * @to: Unsafe destination address. + * @from: Safe source address, in hypervisor space. + * @n: Number of bytes to copy. + * + * Copy data from hypervisor space to a potentially unmapped area. + * + * Returns number of bytes that could not be copied. + * On success, this will be zero. + */ +static always_inline unsigned int +copy_to_unsafe(void __user *to, const void *from, unsigned int n) +{ + if (__builtin_constant_p(n)) { + unsigned long ret; + + switch (n) { + case 1: + put_unsafe_size(*(const uint8_t *)from, to, 1, UA_DROP, ret, 1); + return ret; + case 2: + put_unsafe_size(*(const uint16_t *)from, to, 2, UA_DROP, ret, 2); + return ret; + case 4: + put_unsafe_size(*(const uint32_t *)from, to, 4, UA_DROP, ret, 4); + return ret; + case 8: + put_unsafe_size(*(const uint64_t *)from, to, 8, UA_DROP, ret, 8); + return ret; + } + } + + return copy_to_unsafe_ll(to, from, n); +} + +/** + * copy_from_unsafe: - Copy a block of data from unsafe space, with exception + * checking + * @to: Safe destination address, in hypervisor space. + * @from: Unsafe source address. + * @n: Number of bytes to copy. + * + * Copy data from a potentially unmapped area space to hypervisor space. + * + * Returns number of bytes that could not be copied. + * On success, this will be zero. + * + * If some data could not be copied, this function will pad the copied + * data to the requested size using zero bytes. 
+ */
+static always_inline unsigned int
+copy_from_unsafe(void *to, const void __user *from, unsigned int n)
+{
+    if ( __builtin_constant_p(n) )
+    {
+        unsigned long ret;
+
+        switch ( n )
+        {
+        case 1:
+            get_unsafe_size(*(uint8_t *)to, from, 1, UA_DROP, ret, 1);
+            return ret;
+        case 2:
+            get_unsafe_size(*(uint16_t *)to, from, 2, UA_DROP, ret, 2);
+            return ret;
+        case 4:
+            get_unsafe_size(*(uint32_t *)to, from, 4, UA_DROP, ret, 4);
+            return ret;
+        case 8:
+            get_unsafe_size(*(uint64_t *)to, from, 8, UA_DROP, ret, 8);
+            return ret;
+        }
+    }
+
+    return copy_from_unsafe_ll(to, from, n);
+}
-#define copy_from_unsafe __copy_from_guest_pv
 
 /*
  * The exception table consists of pairs of addresses: the first is the
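The UA_KEEP()/UA_DROP() pair the above relies on is a plain preprocessor string-splicing trick. A small stand-alone illustration (hosted C, not hypervisor code; everything except UA_KEEP/UA_DROP is invented) of how one macro body yields both the masked "guest" template and the unmasked "unsafe" one:

    #include <stdio.h>

    #define UA_KEEP(args...) args
    #define UA_DROP(args...)

    /* Stand-in for the asm() template built by {get,put}_unsafe_asm(). */
    #define ACCESS_TEMPLATE(GUARD) \
        (GUARD("guest_access_mask_ptr ...; ") "1: mov ...")

    int main(void)
    {
        puts(ACCESS_TEMPLATE(UA_KEEP)); /* guest variant: mask, then move */
        puts(ACCESS_TEMPLATE(UA_DROP)); /* unsafe variant: move only */
        return 0;
    }

get_guest_size() passes UA_KEEP for the guard, get_unsafe() passes UA_DROP, so a single asm body serves both accessor families.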
From patchwork Wed Feb 17 08:21:05 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091191
Subject: [PATCH v2 4/8] x86: rename {get,put}_user() to {get,put}_guest()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Message-ID: <369ae5ec-ee2a-78d4-438f-b18d04c81c4c@suse.com>
Date: Wed, 17 Feb 2021 09:21:05 +0100

Bring them (back) in line with __{get,put}_guest().

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1649,19 +1649,19 @@ static void load_segments(struct vcpu *n
 
         if ( !ring_1(regs) )
         {
-            ret  = put_user(regs->ss,  esp-1);
-            ret |= put_user(regs->esp, esp-2);
+            ret  = put_guest(regs->ss,  esp - 1);
+            ret |= put_guest(regs->esp, esp - 2);
             esp -= 2;
         }
 
         if ( ret |
-             put_user(rflags,      esp-1) |
-             put_user(cs_and_mask, esp-2) |
-             put_user(regs->eip,   esp-3) |
-             put_user(uregs->gs,   esp-4) |
-             put_user(uregs->fs,   esp-5) |
-             put_user(uregs->es,   esp-6) |
-             put_user(uregs->ds,   esp-7) )
+             put_guest(rflags,      esp - 1) |
+             put_guest(cs_and_mask, esp - 2) |
+             put_guest(regs->eip,   esp - 3) |
+             put_guest(uregs->gs,   esp - 4) |
+             put_guest(uregs->fs,   esp - 5) |
+             put_guest(uregs->es,   esp - 6) |
+             put_guest(uregs->ds,   esp - 7) )
         {
             gprintk(XENLOG_ERR,
                     "error while creating compat failsafe callback frame\n");
@@ -1690,17 +1690,17 @@ static void load_segments(struct vcpu *n
         cs_and_mask = (unsigned long)regs->cs |
             ((unsigned long)vcpu_info(n, evtchn_upcall_mask) << 32);
 
-        if ( put_user(regs->ss,    rsp- 1) |
-             put_user(regs->rsp,   rsp- 2) |
-             put_user(rflags,      rsp- 3) |
-             put_user(cs_and_mask, rsp- 4) |
-             put_user(regs->rip,   rsp- 5) |
-             put_user(uregs->gs,   rsp- 6) |
-             put_user(uregs->fs,   rsp- 7) |
-             put_user(uregs->es,   rsp- 8) |
-             put_user(uregs->ds,   rsp- 9) |
-             put_user(regs->r11,   rsp-10) |
-             put_user(regs->rcx,   rsp-11) )
+        if ( put_guest(regs->ss,    rsp -  1) |
+             put_guest(regs->rsp,   rsp -  2) |
+             put_guest(rflags,      rsp -  3) |
+             put_guest(cs_and_mask, rsp -  4) |
+             put_guest(regs->rip,   rsp -  5) |
+             put_guest(uregs->gs,   rsp -  6) |
+             put_guest(uregs->fs,   rsp -  7) |
+             put_guest(uregs->es,   rsp -  8) |
+             put_guest(uregs->ds,   rsp -  9) |
+             put_guest(regs->r11,   rsp - 10) |
+             put_guest(regs->rcx,   rsp - 11) )
        {
            gprintk(XENLOG_ERR,
                    "error while creating failsafe callback frame\n");
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -26,14 +26,12 @@ extern void __put_user_bad(void);
 #define UA_DROP(args...)
 
 /**
- * get_user: - Get a simple variable from user space.
+ * get_guest: - Get a simple variable from guest space.
  * @x:   Variable to store result.
- * @ptr: Source address, in user space.
- *
- * Context: User context only. This function may sleep.
+ * @ptr: Source address, in guest space.
  *
- * This macro copies a single simple variable from user space to kernel
- * space. It supports simple types like char and int, but not larger
+ * This macro loads a single simple variable from guest space.
+ * It supports simple types like char and int, but not larger * data types like structures or arrays. * * @ptr must have pointer-to-simple-variable type, and the result of @@ -42,18 +40,15 @@ extern void __put_user_bad(void); * Returns zero on success, or -EFAULT on error. * On error, the variable @x is set to zero. */ -#define get_user(x,ptr) \ - __get_user_check((x),(ptr),sizeof(*(ptr))) +#define get_guest(x, ptr) get_guest_check(x, ptr, sizeof(*(ptr))) /** - * put_user: - Write a simple value into user space. - * @x: Value to copy to user space. - * @ptr: Destination address, in user space. - * - * Context: User context only. This function may sleep. + * put_guest: - Write a simple value into guest space. + * @x: Value to store in guest space. + * @ptr: Destination address, in guest space. * - * This macro copies a single simple value from kernel space to user - * space. It supports simple types like char and int, but not larger + * This macro stores a single simple value from to guest space. + * It supports simple types like char and int, but not larger * data types like structures or arrays. * * @ptr must have pointer-to-simple-variable type, and @x must be assignable @@ -61,8 +56,8 @@ extern void __put_user_bad(void); * * Returns zero on success, or -EFAULT on error. */ -#define put_user(x,ptr) \ - __put_user_check((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr))) +#define put_guest(x, ptr) \ + put_guest_check((__typeof__(*(ptr)))(x), ptr, sizeof(*(ptr))) /** * __get_guest: - Get a simple variable from guest space, with less checking. @@ -118,7 +113,7 @@ extern void __put_user_bad(void); err_; \ }) -#define __put_user_check(x, ptr, size) \ +#define put_guest_check(x, ptr, size) \ ({ \ __typeof__(*(ptr)) __user *ptr_ = (ptr); \ __typeof__(size) size_ = (size); \ @@ -140,7 +135,7 @@ extern void __put_user_bad(void); err_; \ }) -#define __get_user_check(x, ptr, size) \ +#define get_guest_check(x, ptr, size) \ ({ \ __typeof__(*(ptr)) __user *ptr_ = (ptr); \ __typeof__(size) size_ = (size); \ From patchwork Wed Feb 17 08:21:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jan Beulich X-Patchwork-Id: 12091195 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.2 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_SANE_1 autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8ECAC433DB for ; Wed, 17 Feb 2021 08:21:51 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7352964DFF for ; Wed, 17 Feb 2021 08:21:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7352964DFF Authentication-Results: mail.kernel.org; dmarc=fail (p=quarantine dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.86154.161416 (Exim 4.92) (envelope-from ) id 1lCI5c-0002lz-0X; Wed, 17 Feb 2021 08:21:40 +0000 X-Outflank-Mailman: Message body and most 
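For illustration, a minimal (hypothetical) consumer of the renamed
macros, relying on the contract documented above - zero on success,
-EFAULT on failure, with @x zeroed in the latter case:

    static int read_guest_sel(unsigned int *sel,
                              const unsigned int __user *ptr)
    {
        /* get_guest() fully validates the (guest-controlled) address. */
        if ( get_guest(*sel, ptr) )
            return -EFAULT;   /* *sel has been set to zero at this point */

        return 0;
    }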
From patchwork Wed Feb 17 08:21:36 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091195
Subject: [PATCH v2 5/8] x86/gdbsx: convert "user" to "guest" accesses
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Date: Wed, 17 Feb 2021 09:21:36 +0100

By using copy_{from,to}_user(), this code assumed it would only ever be
called for PV guests. Use copy_{from,to}_guest() instead, transforming
the incoming structure field into a guest handle (the field should
really have been one in the first place). Also do not transform the
debuggee address into a pointer.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2: Re-base (bug fix side effect was taken care of already).
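For illustration, a condensed sketch of the access pattern the diff
below converts to; map_page() and page_bytes_left() are placeholders
standing in for the va2mfn/mapping logic in debug.c:

    char *map_page(unsigned long addr);                               /* placeholder */
    unsigned int page_bytes_left(unsigned long addr, unsigned int len); /* placeholder */

    static void rw_sketch(XEN_GUEST_HANDLE_PARAM(void) buf,
                          unsigned long addr, unsigned int len, bool toaddr)
    {
        while ( len > 0 )
        {
            unsigned int pagecnt = page_bytes_left(addr, len);
            char *va = map_page(addr);

            if ( toaddr )
                copy_from_guest(va, buf, pagecnt); /* guest buffer -> page */
            else
                copy_to_guest(buf, va, pagecnt);   /* page -> guest buffer */

            addr += pagecnt;
            guest_handle_add_offset(buf, pagecnt); /* no raw buf += pagecnt */
            len -= pagecnt;
        }
    }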
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -108,12 +108,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct doma
 }

 /* Returns: number of bytes remaining to be copied */
-static unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
-                                     void * __user buf, unsigned int len,
-                                     bool toaddr, uint64_t pgd3)
+static unsigned int dbg_rw_guest_mem(struct domain *dp, unsigned long addr,
+                                     XEN_GUEST_HANDLE_PARAM(void) buf,
+                                     unsigned int len, bool toaddr,
+                                     uint64_t pgd3)
 {
-    unsigned long addr = (unsigned long)gaddr;
-
     while ( len > 0 )
     {
         char *va;
@@ -134,20 +133,18 @@ static unsigned int dbg_rw_guest_mem(str

         if ( toaddr )
         {
-            copy_from_user(va, buf, pagecnt);    /* va = buf */
+            copy_from_guest(va, buf, pagecnt);
             paging_mark_dirty(dp, mfn);
         }
         else
-        {
-            copy_to_user(buf, va, pagecnt);    /* buf = va */
-        }
+            copy_to_guest(buf, va, pagecnt);

         unmap_domain_page(va);
         if ( !gfn_eq(gfn, INVALID_GFN) )
             put_gfn(dp, gfn_x(gfn));

         addr += pagecnt;
-        buf += pagecnt;
+        guest_handle_add_offset(buf, pagecnt);
         len -= pagecnt;
     }

@@ -161,7 +158,7 @@ static unsigned int dbg_rw_guest_mem(str
  *  pgd3: value of init_mm.pgd[3] in guest. see above.
  *  Returns: number of bytes remaining to be copied.
  */
-unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
+unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
                         unsigned int len, domid_t domid, bool toaddr,
                         uint64_t pgd3)
 {
@@ -170,7 +167,7 @@ unsigned int dbg_rw_mem(void * __user ad
     if ( d )
     {
         if ( !d->is_dying )
-            len = dbg_rw_guest_mem(d, addr, buf, len, toaddr, pgd3);
+            len = dbg_rw_guest_mem(d, gva, buf, len, toaddr, pgd3);
         rcu_unlock_domain(d);
     }

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,10 +40,8 @@
 #ifdef CONFIG_GDBSX
 static int gdbsx_guest_mem_io(domid_t domid, struct xen_domctl_gdbsx_memio *iop)
 {
-    void * __user gva = (void *)iop->gva, * __user uva = (void *)iop->uva;
-
-    iop->remain = dbg_rw_mem(gva, uva, iop->len, domid,
-                             !!iop->gwr, iop->pgd3val);
+    iop->remain = dbg_rw_mem(iop->gva, guest_handle_from_ptr(iop->uva, void),
+                             iop->len, domid, iop->gwr, iop->pgd3val);

     return iop->remain ? -EFAULT : 0;
 }
--- a/xen/include/asm-x86/debugger.h
+++ b/xen/include/asm-x86/debugger.h
@@ -93,9 +93,9 @@ static inline bool debugger_trap_entry(
 #endif

 #ifdef CONFIG_GDBSX
-unsigned int dbg_rw_mem(void * __user addr, void * __user buf,
+unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
                         unsigned int len, domid_t domid, bool toaddr,
-                        uint64_t pgd3);
+                        unsigned long pgd3);
 #endif

 #endif /* __X86_DEBUGGER_H__ */
From patchwork Wed Feb 17 08:22:32 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091197
Subject: [PATCH v2 6/8] x86: rename copy_{from,to}_user() to copy_{from,to}_guest_pv()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Message-ID: <5104a32f-e2a1-06a5-a637-9702e4562b81@suse.com>
Date: Wed, 17 Feb 2021 09:22:32 +0100

Bring them (back) in line with __copy_{from,to}_guest_pv(). Since it
falls in the same group, also convert clear_user(). Instead of
adjusting __raw_clear_guest(), drop it - it's unused and would require
a non-checking __clear_guest_pv() which we don't have.

Add previously missing __user at some call sites and in the function
declarations.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/arch/x86/pv/emul-inv-op.c
+++ b/xen/arch/x86/pv/emul-inv-op.c
@@ -33,7 +33,7 @@ static int emulate_forced_invalid_op(str
     eip = regs->rip;

     /* Check for forced emulation signature: ud2 ; .ascii "xen". */
-    if ( (rc = copy_from_user(sig, (char *)eip, sizeof(sig))) != 0 )
+    if ( (rc = copy_from_guest_pv(sig, (char __user *)eip, sizeof(sig))) != 0 )
     {
         pv_inject_page_fault(0, eip + sizeof(sig) - rc);
         return EXCRET_fault_fixed;
@@ -43,7 +43,8 @@ static int emulate_forced_invalid_op(str
     eip += sizeof(sig);

     /* We only emulate CPUID. */
-    if ( ( rc = copy_from_user(instr, (char *)eip, sizeof(instr))) != 0 )
+    if ( (rc = copy_from_guest_pv(instr, (char __user *)eip,
+                                  sizeof(instr))) != 0 )
     {
         pv_inject_page_fault(0, eip + sizeof(instr) - rc);
         return EXCRET_fault_fixed;
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -54,8 +54,8 @@ unsigned long do_iret(void)
     struct iret_context iret_saved;
     struct vcpu *v = current;

-    if ( unlikely(copy_from_user(&iret_saved, (void *)regs->rsp,
-                                 sizeof(iret_saved))) )
+    if ( unlikely(copy_from_guest_pv(&iret_saved, (void __user *)regs->rsp,
+                                     sizeof(iret_saved))) )
     {
         gprintk(XENLOG_ERR,
                 "Fault while reading IRET context from guest stack\n");
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -90,7 +90,8 @@ static int ptwr_emulated_update(unsigned

     /* Align address; read full word. */
     addr &= ~(sizeof(full) - 1);
-    if ( (rc = copy_from_user(&full, (void *)addr, sizeof(full))) != 0 )
+    if ( (rc = copy_from_guest_pv(&full, (void __user *)addr,
+                                  sizeof(full))) != 0 )
     {
         x86_emul_pagefault(0, /* Read fault. */
                            addr + sizeof(full) - rc,
--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -109,19 +109,17 @@ unsigned int copy_from_guest_ll(void *to
 #if GUARD(1) + 0

 /**
- * copy_to_user: - Copy a block of data into user space.
- * @to:   Destination address, in user space.
- * @from: Source address, in kernel space.
+ * copy_to_guest_pv: - Copy a block of data into guest space.
+ * @to:   Destination address, in guest space.
+ * @from: Source address, in hypervisor space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only. This function may sleep.
- *
- * Copy data from kernel space to user space.
+ * Copy data from hypervisor space to guest space.
  *
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
  */
-unsigned copy_to_user(void __user *to, const void *from, unsigned n)
+unsigned int copy_to_guest_pv(void __user *to, const void *from, unsigned int n)
 {
     if ( access_ok(to, n) )
         n = __copy_to_guest_pv(to, from, n);
@@ -129,16 +127,16 @@ unsigned copy_to_user(void __user *to, c
 }

 /**
- * clear_user: - Zero a block of memory in user space.
- * @to: Destination address, in user space.
+ * clear_guest_pv: - Zero a block of memory in guest space.
+ * @to: Destination address, in guest space.
  * @n:  Number of bytes to zero.
  *
- * Zero a block of memory in user space.
+ * Zero a block of memory in guest space.
  *
  * Returns number of bytes that could not be cleared.
  * On success, this will be zero.
  */
-unsigned clear_user(void __user *to, unsigned n)
+unsigned int clear_guest_pv(void __user *to, unsigned int n)
 {
     if ( access_ok(to, n) )
     {
@@ -168,14 +166,12 @@ unsigned clear_user(void __user *to, uns
 }

 /**
- * copy_from_user: - Copy a block of data from user space.
- * @to:   Destination address, in kernel space.
- * @from: Source address, in user space.
+ * copy_from_guest_pv: - Copy a block of data from guest space.
+ * @to:   Destination address, in hypervisor space.
+ * @from: Source address, in guest space.
  * @n:    Number of bytes to copy.
  *
- * Context: User context only. This function may sleep.
- *
- * Copy data from user space to kernel space.
+ * Copy data from guest space to hypervisor space.
  *
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
@@ -183,7 +179,8 @@ unsigned clear_user(void __user *to, uns
  * If some data could not be copied, this function will pad the copied
  * data to the requested size using zero bytes.
  */
-unsigned copy_from_user(void *to, const void __user *from, unsigned n)
+unsigned int copy_from_guest_pv(void *to, const void __user *from,
+                                unsigned int n)
 {
     if ( access_ok(from, n) )
         n = __copy_from_guest_pv(to, from, n);
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -16,15 +16,15 @@
 #define raw_copy_to_guest(dst, src, len)        \
     (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
-     copy_to_user((dst), (src), (len)))
+     copy_to_guest_pv(dst, src, len))
 #define raw_copy_from_guest(dst, src, len)      \
     (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
-     copy_from_user((dst), (src), (len)))
+     copy_from_guest_pv(dst, src, len))
 #define raw_clear_guest(dst, len)               \
     (is_hvm_vcpu(current) ?                     \
      clear_user_hvm((dst), (len)) :             \
-     clear_user((dst), (len)))
+     clear_guest_pv(dst, len))
 #define __raw_copy_to_guest(dst, src, len)      \
     (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
@@ -33,10 +33,6 @@
     (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
      __copy_from_guest_pv(dst, src, len))
-#define __raw_clear_guest(dst, len)             \
-    (is_hvm_vcpu(current) ?                     \
-     clear_user_hvm((dst), (len)) :             \
-     clear_user((dst), (len)))

 /*
  * Pre-validate a guest handle.
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -9,9 +9,11 @@

 #include <asm/x86_64/uaccess.h>

-unsigned copy_to_user(void *to, const void *from, unsigned len);
-unsigned clear_user(void *to, unsigned len);
-unsigned copy_from_user(void *to, const void *from, unsigned len);
+unsigned int copy_to_guest_pv(void __user *to, const void *from,
+                              unsigned int len);
+unsigned int clear_guest_pv(void __user *to, unsigned int len);
+unsigned int copy_from_guest_pv(void *to, const void __user *from,
+                                unsigned int len);

 /* Handles exceptions in both to and from, but doesn't do access_ok */
 unsigned int copy_to_guest_ll(void __user*to, const void *from, unsigned int n);
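For illustration, a hypothetical caller making use of the documented
return convention of the renamed function - the number of bytes *not*
copied, with the destination tail zero-padded on a partial copy
(fetch_bytes() is made up for this sketch):

    static int fetch_bytes(void *buf, const void __user *src,
                           unsigned int len)
    {
        unsigned int left = copy_from_guest_pv(buf, src, len);

        if ( left )
            return -EFAULT;  /* faulted after (len - left) bytes; the
                                trailing 'left' bytes of buf are zeroed */

        return 0;
    }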
From patchwork Wed Feb 17 08:22:59 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091199
Subject: [PATCH v2 7/8] x86: move stac()/clac() from {get,put}_unsafe_asm() ...
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Date: Wed, 17 Feb 2021 09:22:59 +0100

... to {get,put}_unsafe_size(). There's no need to have the macros
expanded once per case label in the latter. This also makes the former
well-formed single statements again. No change in generated code.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -154,7 +154,6 @@ struct __large_struct { unsigned long bu
  * aliasing issues.
  */
 #define put_unsafe_asm(x, addr, GUARD, err, itype, rtype, ltype, errret) \
-    stac();                                                              \
     __asm__ __volatile__(                                                \
         GUARD(                                                           \
         "    guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n"           \
@@ -169,11 +168,9 @@ struct __large_struct { unsigned long bu
         : [ret] "+r" (err), [ptr] "=&r" (dummy_)                         \
           GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))          \
         : [val] ltype (x), "m" (__m(addr)),                              \
-          "[ptr]" (addr), [errno] "i" (errret));                         \
-    clac()
+          "[ptr]" (addr), [errno] "i" (errret))

 #define get_unsafe_asm(x, addr, GUARD, err, rtype, ltype, errret) \
-    stac();                                                       \
     __asm__ __volatile__(                                         \
         GUARD(                                                    \
         "    guest_access_mask_ptr %[ptr], %[scr1], %[scr2]\n"    \
@@ -190,12 +187,12 @@ struct __large_struct { unsigned long bu
           [ptr] "=&r" (dummy_)                                    \
           GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))   \
         : "m" (__m(addr)), "[ptr]" (addr),                        \
-          [errno] "i" (errret));                                  \
-    clac()
+          [errno] "i" (errret))

 #define put_unsafe_size(x, ptr, size, grd, retval, errret)  \
 do {                                                        \
     retval = 0;                                             \
+    stac();                                                 \
     switch ( size )                                         \
     {                                                       \
     long dummy_;                                            \
@@ -213,6 +210,7 @@ do {
                       break;                                \
     default: __put_user_bad();                              \
     }                                                       \
+    clac();                                                 \
 } while ( false )

 #define put_guest_size(x, ptr, size, retval, errret) \
@@ -221,6 +219,7 @@ do {
 #define get_unsafe_size(x, ptr, size, grd, retval, errret)                    \
 do {                                                                          \
     retval = 0;                                                               \
+    stac();                                                                   \
     switch ( size )                                                           \
     {                                                                         \
     long dummy_;                                                              \
@@ -230,6 +229,7 @@ do {
     case 8: get_unsafe_asm(x, ptr, grd, retval, "", "=r", errret); break;     \
     default: __get_user_bad();                                                \
     }                                                                         \
+    clac();                                                                   \
 } while ( false )

 #define get_guest_size(x, ptr, size, retval, errret) \
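For illustration, a generic example (not the Xen macros) of the
single-statement hazard that moving stac()/clac() out of the asm
wrappers repairs:

    void f(int x);
    void g(int x);

    #define TWO_STMTS(x) f(x); g(x)   /* expands to two statements */

    void demo(int cond)
    {
        if ( cond )
            TWO_STMTS(1);  /* g(1) escapes the if() and always runs;
                              adding an else branch would not compile */
    }

With stac()/clac() inside the do { ... } while ( false ) bodies of
{get,put}_unsafe_size(), the asm wrappers expand to a single statement
and remain safe in such contexts.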
From patchwork Wed Feb 17 08:23:33 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12091201
Subject: [PATCH v2 8/8] x86/PV: use get_unsafe() instead of copy_from_unsafe()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Ian Jackson
Message-ID: <0a59ae2f-448e-610d-e8a2-a7c3f9f3918f@suse.com>
Date: Wed, 17 Feb 2021 09:23:33 +0100

The former expands to a single (memory accessing) insn, which the
latter does not guarantee. Yet we'd prefer to read consistent PTEs
rather than risking a split read racing with an update done elsewhere.
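For illustration (sketch only; pl2e stands for the linear-map slot used
in the diff below):

    l2_pgentry_t l2e;

    if ( get_unsafe(l2e, pl2e) )   /* always a single 8-byte access */
        return NULL;

    /*
     * By contrast, copy_from_unsafe(&l2e, pl2e, sizeof(l2e)) merely tends
     * to compile to the same load; a split (e.g. two 4-byte) copy would be
     * a legitimate implementation and could observe a torn PTE, i.e. a mix
     * of old and new halves.
     */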
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -41,9 +41,7 @@ l1_pgentry_t *map_guest_l1e(unsigned lon
         return NULL;

     /* Find this l1e and its enclosing l1mfn in the linear map. */
-    if ( copy_from_unsafe(&l2e,
-                          &__linear_l2_table[l2_linear_offset(linear)],
-                          sizeof(l2_pgentry_t)) )
+    if ( get_unsafe(l2e, &__linear_l2_table[l2_linear_offset(linear)]) )
         return NULL;

     /* Check flags that it will be safe to read the l1e. */
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -22,9 +22,7 @@ static inline l1_pgentry_t guest_get_eff
         toggle_guest_pt(curr);

     if ( unlikely(!__addr_ok(linear)) ||
-         copy_from_unsafe(&l1e,
-                          &__linear_l1_table[l1_linear_offset(linear)],
-                          sizeof(l1_pgentry_t)) )
+         get_unsafe(l1e, &__linear_l1_table[l1_linear_offset(linear)]) )
         l1e = l1e_empty();

     if ( user_mode )