From patchwork Fri Apr 12 08:42:07 2024
X-Patchwork-Submitter: Steven Price <steven.price@arm.com>
X-Patchwork-Id: 13627178
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
    James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev, Ganapatrao Kulkarni
Subject: [PATCH v2 08/14] arm64: Enforce bounce buffers for realm DMA
Date: Fri, 12 Apr 2024 09:42:07 +0100
Message-Id: <20240412084213.1733764-9-steven.price@arm.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240412084213.1733764-1-steven.price@arm.com>
References: <20240412084056.1733704-1-steven.price@arm.com>
 <20240412084213.1733764-1-steven.price@arm.com>

Within a realm guest it is not possible for a device emulated by the
VMM to access arbitrary guest memory, so force the use of bounce
buffers. This ensures that any memory the emulated devices access is
in pages which have been explicitly shared with the host.
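
For readers unfamiliar with the mechanism, a brief illustration of why
no driver changes are needed (sketch only, not part of this patch;
example_tx() and its device are made up):

/*
 * Illustrative sketch only. With SWIOTLB_FORCE in effect,
 * dma_map_single() transparently copies the buffer into the bounce
 * pool (which this patch makes host-shared) and returns the bounce
 * buffer's DMA address, so the emulated device only ever sees
 * addresses inside the shared pool.
 */
#include <linux/dma-mapping.h>

static int example_tx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... point the (VMM-emulated) device at 'dma' ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}

The only realm-specific work is making the bounce pool itself shared,
which is what the swiotlb_update_mem_attributes() call below is for.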
Co-developed-by: Suzuki K Poulose
Signed-off-by: Suzuki K Poulose
Signed-off-by: Steven Price <steven.price@arm.com>
---
 arch/arm64/kernel/rsi.c |  2 ++
 arch/arm64/mm/init.c    | 11 +++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
index 159bc428c77b..5c8ed3aaa35f 100644
--- a/arch/arm64/kernel/rsi.c
+++ b/arch/arm64/kernel/rsi.c
@@ -5,6 +5,8 @@
 
 #include <linux/jump_label.h>
 #include <linux/memblock.h>
+#include <linux/swiotlb.h>
+
 #include <asm/rsi.h>
 
 struct realm_config __attribute((aligned(PAGE_SIZE))) config;

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 786fd6ce5f17..01a2e3ce6921 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -370,7 +370,9 @@ void __init bootmem_init(void)
  */
 void __init mem_init(void)
 {
-	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
+	bool swiotlb = (max_pfn > PFN_DOWN(arm64_dma_phys_limit));
+
+	swiotlb |= is_realm_world();
 
 	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
 		/*
@@ -383,7 +385,12 @@ void __init mem_init(void)
 		swiotlb = true;
 	}
 
-	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
+	if (is_realm_world()) {
+		swiotlb_init(swiotlb, SWIOTLB_VERBOSE | SWIOTLB_FORCE);
+		swiotlb_update_mem_attributes();
+	} else {
+		swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
+	}
 
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
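
For reference, swiotlb_update_mem_attributes() already exists in
kernel/dma/swiotlb.c. Roughly (simplified sketch; field names vary
between kernel versions), it hands the whole default bounce pool to
the arch's set_memory_decrypted() hook, which this series implements
on arm64 with RSI calls that share the pages with the host:

/*
 * Simplified sketch of swiotlb_update_mem_attributes() from
 * kernel/dma/swiotlb.c, for context only. It transitions the default
 * bounce pool to host-shared via set_memory_decrypted().
 */
void __init swiotlb_update_mem_attributes(void)
{
	struct io_tlb_pool *mem = &io_tlb_default_mem.defpool;
	unsigned long bytes;

	/* Nothing to do if no pool was allocated, or it came late. */
	if (!mem->nslabs || mem->late_alloc)
		return;

	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
	set_memory_decrypted((unsigned long)mem->vaddr, bytes >> PAGE_SHIFT);
}

Calling it immediately after swiotlb_init() means the pool is shared
with the host before any DMA mapping can make use of it.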