
[2/2] KVM: arm64: Redefine pKVM memory transitions in terms of source/target

Message ID 20221028083448.1998389-3-oliver.upton@linux.dev (mailing list archive)
State New, archived
Series KVM: arm64: pKVM memory transitions cleanup

Commit Message

Oliver Upton Oct. 28, 2022, 8:34 a.m. UTC
Perhaps it is just me, but the 'initiator' and 'completer' terms are
slightly confusing descriptors for the addresses involved in a memory
transition. Apply a rename to instead describe memory transitions in
terms of a source and target address.

No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 68 +++++++++++++--------------
 1 file changed, 34 insertions(+), 34 deletions(-)
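
For reference, a host -> hyp share ends up being described like this after the
rename (a minimal sketch based on the __pkvm_host_share_hyp() initializer in
the diff below; the addr values are whatever the caller computed for each
address space):

	struct pkvm_mem_share share = {
		.tx	= {
			.nr_pages	= 1,
			.source	= {
				/* Address in the source's (host's) address space */
				.id	= PKVM_ID_HOST,
				.addr	= host_addr,
			},
			.target	= {
				/* Address in the target's (hyp's) address space */
				.id	= PKVM_ID_HYP,
				.addr	= hyp_addr,
			},
		},
		/* Permissions granted to the target by the share */
		.target_prot	= PAGE_HYP,
	};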

Comments

Quentin Perret Oct. 28, 2022, 9:57 a.m. UTC | #1
Hey Oliver,

On Friday 28 Oct 2022 at 08:34:48 (+0000), Oliver Upton wrote:
> Perhaps it is just me, but the 'initiator' and 'completer' terms are
> slightly confusing descriptors for the addresses involved in a memory
> transition. Apply a rename to instead describe memory transitions in
> terms of a source and target address.

Just to provide some rationale for the initiator/completer terminology,
the very first implementation we did of this used 'sender/recipient' (or
something along those lines, I think), and we ended up confusing
ourselves massively. The main issue is that memory doesn't necessarily
'flow' in the same direction as the transition. It's all fine for a
donation or a share, but reclaim and unshare become funny. 'The
recipient of an unshare' can be easily misunderstood, I think.

So yeah, we ended up with initiator/completer, which may not be the
prettiest terminology, but it was useful to disambiguate things at
least.
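
To make the 'flow' point concrete, this is roughly what the unshare path sets
up today (a sketch of the __pkvm_host_unshare_hyp() initializer with the
current field names, plus the state transitions from the do_unshare() comment):

	/*
	 * The host initiates the unshare and ends up with the page back
	 * (SHARED_OWNED => OWNED); the hyp merely completes it by dropping
	 * its mapping (SHARED_BORROWED => NOPAGE). Calling the hyp the
	 * 'recipient' here is what used to trip people up.
	 */
	struct pkvm_mem_share share = {
		.tx	= {
			.nr_pages	= 1,
			.initiator	= {
				.id	= PKVM_ID_HOST,
				.addr	= host_addr,
			},
			.completer	= {
				.id	= PKVM_ID_HYP,
				.addr	= hyp_addr,
			},
		},
		.completer_prot	= PAGE_HYP,
	};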

Thanks for the review!
Quentin
Oliver Upton Oct. 28, 2022, 10:23 a.m. UTC | #2
Quentin,

On Fri, Oct 28, 2022 at 09:57:04AM +0000, Quentin Perret wrote:
> Hey Oliver,
> 
> On Friday 28 Oct 2022 at 08:34:48 (+0000), Oliver Upton wrote:
> > Perhaps it is just me, but the 'initiator' and 'completer' terms are
> > slightly confusing descriptors for the addresses involved in a memory
> > transition. Apply a rename to instead describe memory transitions in
> > terms of a source and target address.
> 
> Just to provide some rationale for the initiator/completer terminology,
> the very first implementation we did of this used 'sender/recipient' (or
> something along those lines, I think), and we ended up confusing
> ourselves massively. The main issue is that memory doesn't necessarily
> 'flow' in the same direction as the transition. It's all fine for a
> donation or a share, but reclaim and unshare become funny. 'The
> recipient of an unshare' can be easily misunderstood, I think.
> 
> So yeah, we ended up with initiator/completer, which may not be the
> prettiest terminology, but it was useful to disambiguate things at
> least.

I see, thanks for the background :) If I've managed to re-ambiguate the
language here then LMK. Frankly, I'm more strongly motivated on the
first patch anyway.

--
Thanks,
Oliver
Will Deacon Nov. 10, 2022, 10:46 a.m. UTC | #3
On Fri, Oct 28, 2022 at 10:23:36AM +0000, Oliver Upton wrote:
> On Fri, Oct 28, 2022 at 09:57:04AM +0000, Quentin Perret wrote:
> > On Friday 28 Oct 2022 at 08:34:48 (+0000), Oliver Upton wrote:
> > > Perhaps it is just me, but the 'initiator' and 'completer' terms are
> > > slightly confusing descriptors for the addresses involved in a memory
> > > transition. Apply a rename to instead describe memory transitions in
> > > terms of a source and target address.
> > 
> > Just to provide some rationale for the initiator/completer terminology,
> > the very first implementation we did of this used 'sender/recipient' (or
> > something along those lines, I think), and we ended up confusing
> > ourselves massively. The main issue is that memory doesn't necessarily
> > 'flow' in the same direction as the transition. It's all fine for a
> > donation or a share, but reclaim and unshare become funny. 'The
> > recipient of an unshare' can be easily misunderstood, I think.
> > 
> > So yeah, we ended up with initiator/completer, which may not be the
> > prettiest terminology, but it was useful to disambiguate things at
> > least.
> 
> I see, thanks for the background :) If I've managed to re-ambiguate the
> language here then LMK. Frankly, I'm more strongly motivated on the
> first patch anyway.

Having been previously tangled up in the confusion mentioned by Quentin, I'm
also strongly in favour of leaving the terminology as-is for the time being.
Once we have some of the more interesting memory transitions (i.e.
approaching the cross-product of host/guest/hyp/trustzone doing
share/unshare/donate) then I think we'll be in a much better position to
improve the naming, but whatever we change now is very unlikely to stick and
the patches as we have them now are at least consistent.
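
For context, the switch statements in check_share()/check_unshare() currently
only know about a host initiator and a hyp completer. A rough sketch of how the
component id space could grow (the last two enumerators are made-up names for
illustration, not anything in the tree); each new pairing multiplies the number
of request/initiate/ack/complete handlers across share/unshare/donate:

	enum pkvm_component_id {
		PKVM_ID_HOST,
		PKVM_ID_HYP,
		/* Hypothetical future endpoints, names invented for illustration: */
		PKVM_ID_GUEST,
		PKVM_ID_TZ,
	};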

I replied separately on the first patch, as I don't really have a strong
opinion on that one.

Will

Patch

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 3636a24e1b34..3ea389a8166f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -391,20 +391,20 @@  struct pkvm_mem_transition {
 
 	struct {
 		enum pkvm_component_id	id;
-		/* Address in the initiator's address space */
+		/* Address in the source's address space */
 		u64			addr;
-	} initiator;
+	} source;
 
 	struct {
 		enum pkvm_component_id	id;
-		/* Address in the completer's address space */
+		/* Address in the target's address space */
 		u64			addr;
-	} completer;
+	} target;
 };
 
 struct pkvm_mem_share {
 	const struct pkvm_mem_transition	tx;
-	const enum kvm_pgtable_prot		completer_prot;
+	const enum kvm_pgtable_prot		target_prot;
 };
 
 struct check_walk_data {
@@ -469,7 +469,7 @@  static int __host_set_page_state_range(u64 addr, u64 size,
 static int host_request_owned_transition(const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
-	u64 addr = tx->initiator.addr;
+	u64 addr = tx->source.addr;
 
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
@@ -477,7 +477,7 @@  static int host_request_owned_transition(const struct pkvm_mem_transition *tx)
 static int host_request_unshare(const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
-	u64 addr = tx->initiator.addr;
+	u64 addr = tx->source.addr;
 
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
 }
@@ -485,7 +485,7 @@  static int host_request_unshare(const struct pkvm_mem_transition *tx)
 static int host_initiate_share(const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
-	u64 addr = tx->initiator.addr;
+	u64 addr = tx->source.addr;
 
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
 }
@@ -493,7 +493,7 @@  static int host_initiate_share(const struct pkvm_mem_transition *tx)
 static int host_initiate_unshare(const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
-	u64 addr = tx->initiator.addr;
+	u64 addr = tx->source.addr;
 
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
@@ -521,7 +521,7 @@  static int __hyp_check_page_state_range(u64 addr, u64 size,
 static bool __hyp_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx)
 {
 	return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) ||
-		 tx->initiator.id != PKVM_ID_HOST);
+		 tx->source.id != PKVM_ID_HOST);
 }
 
 static int hyp_ack_share(const struct pkvm_mem_transition *tx,
@@ -535,7 +535,7 @@  static int hyp_ack_share(const struct pkvm_mem_transition *tx,
 	if (__hyp_ack_skip_pgtable_check(tx))
 		return 0;
 
-	return __hyp_check_page_state_range(tx->completer.addr, size, PKVM_NOPAGE);
+	return __hyp_check_page_state_range(tx->target.addr, size, PKVM_NOPAGE);
 }
 
 static int hyp_ack_unshare(const struct pkvm_mem_transition *tx)
@@ -545,14 +545,14 @@  static int hyp_ack_unshare(const struct pkvm_mem_transition *tx)
 	if (__hyp_ack_skip_pgtable_check(tx))
 		return 0;
 
-	return __hyp_check_page_state_range(tx->completer.addr, size,
+	return __hyp_check_page_state_range(tx->target.addr, size,
 					    PKVM_PAGE_SHARED_BORROWED);
 }
 
 static int hyp_complete_share(const struct pkvm_mem_transition *tx,
 			      enum kvm_pgtable_prot perms)
 {
-	void *start = (void *)tx->completer.addr;
+	void *start = (void *)tx->target.addr;
 	void *end = start + (tx->nr_pages * PAGE_SIZE);
 	enum kvm_pgtable_prot prot;
 
@@ -563,7 +563,7 @@  static int hyp_complete_share(const struct pkvm_mem_transition *tx,
 static int hyp_complete_unshare(const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
-	int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, tx->completer.addr, size);
+	int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, tx->target.addr, size);
 
 	return (ret != size) ? -EFAULT : 0;
 }
@@ -573,7 +573,7 @@  static int check_share(struct pkvm_mem_share *share)
 	const struct pkvm_mem_transition *tx = &share->tx;
 	int ret;
 
-	switch (tx->initiator.id) {
+	switch (tx->source.id) {
 	case PKVM_ID_HOST:
 		ret = host_request_owned_transition(tx);
 		break;
@@ -584,9 +584,9 @@  static int check_share(struct pkvm_mem_share *share)
 	if (ret)
 		return ret;
 
-	switch (tx->completer.id) {
+	switch (tx->target.id) {
 	case PKVM_ID_HYP:
-		ret = hyp_ack_share(tx, share->completer_prot);
+		ret = hyp_ack_share(tx, share->target_prot);
 		break;
 	default:
 		ret = -EINVAL;
@@ -600,7 +600,7 @@  static int __do_share(struct pkvm_mem_share *share)
 	const struct pkvm_mem_transition *tx = &share->tx;
 	int ret;
 
-	switch (tx->initiator.id) {
+	switch (tx->source.id) {
 	case PKVM_ID_HOST:
 		ret = host_initiate_share(tx);
 		break;
@@ -611,9 +611,9 @@  static int __do_share(struct pkvm_mem_share *share)
 	if (ret)
 		return ret;
 
-	switch (tx->completer.id) {
+	switch (tx->target.id) {
 	case PKVM_ID_HYP:
-		ret = hyp_complete_share(tx, share->completer_prot);
+		ret = hyp_complete_share(tx, share->target_prot);
 		break;
 	default:
 		ret = -EINVAL;
@@ -628,8 +628,8 @@  static int __do_share(struct pkvm_mem_share *share)
  * The page owner grants access to another component with a given set
  * of permissions.
  *
- * Initiator: OWNED	=> SHARED_OWNED
- * Completer: NOPAGE	=> SHARED_BORROWED
+ * Source: OWNED	=> SHARED_OWNED
+ * Target: NOPAGE	=> SHARED_BORROWED
  */
 static int do_share(struct pkvm_mem_share *share)
 {
@@ -647,7 +647,7 @@  static int check_unshare(struct pkvm_mem_share *share)
 	const struct pkvm_mem_transition *tx = &share->tx;
 	int ret;
 
-	switch (tx->initiator.id) {
+	switch (tx->source.id) {
 	case PKVM_ID_HOST:
 		ret = host_request_unshare(tx);
 		break;
@@ -658,7 +658,7 @@  static int check_unshare(struct pkvm_mem_share *share)
 	if (ret)
 		return ret;
 
-	switch (tx->completer.id) {
+	switch (tx->target.id) {
 	case PKVM_ID_HYP:
 		ret = hyp_ack_unshare(tx);
 		break;
@@ -674,7 +674,7 @@  static int __do_unshare(struct pkvm_mem_share *share)
 	const struct pkvm_mem_transition *tx = &share->tx;
 	int ret;
 
-	switch (tx->initiator.id) {
+	switch (tx->source.id) {
 	case PKVM_ID_HOST:
 		ret = host_initiate_unshare(tx);
 		break;
@@ -685,7 +685,7 @@  static int __do_unshare(struct pkvm_mem_share *share)
 	if (ret)
 		return ret;
 
-	switch (tx->completer.id) {
+	switch (tx->target.id) {
 	case PKVM_ID_HYP:
 		ret = hyp_complete_unshare(tx);
 		break;
@@ -702,8 +702,8 @@  static int __do_unshare(struct pkvm_mem_share *share)
  * The page owner revokes access from another component for a range of
  * pages which were previously shared using do_share().
  *
- * Initiator: SHARED_OWNED	=> OWNED
- * Completer: SHARED_BORROWED	=> NOPAGE
+ * Source: SHARED_OWNED	=> OWNED
+ * Target: SHARED_BORROWED	=> NOPAGE
  */
 static int do_unshare(struct pkvm_mem_share *share)
 {
@@ -724,16 +724,16 @@  int __pkvm_host_share_hyp(u64 pfn)
 	struct pkvm_mem_share share = {
 		.tx	= {
 			.nr_pages	= 1,
-			.initiator	= {
+			.source	= {
 				.id	= PKVM_ID_HOST,
 				.addr	= host_addr,
 			},
-			.completer	= {
+			.target	= {
 				.id	= PKVM_ID_HYP,
 				.addr	= hyp_addr,
 			},
 		},
-		.completer_prot	= PAGE_HYP,
+		.target_prot	= PAGE_HYP,
 	};
 
 	host_lock_component();
@@ -755,16 +755,16 @@  int __pkvm_host_unshare_hyp(u64 pfn)
 	struct pkvm_mem_share share = {
 		.tx	= {
 			.nr_pages	= 1,
-			.initiator	= {
+			.source	= {
 				.id	= PKVM_ID_HOST,
 				.addr	= host_addr,
 			},
-			.completer	= {
+			.target	= {
 				.id	= PKVM_ID_HYP,
 				.addr	= hyp_addr,
 			},
 		},
-		.completer_prot	= PAGE_HYP,
+		.target_prot	= PAGE_HYP,
 	};
 
 	host_lock_component();