
[v2,04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts

Message ID 69623630-cda7-9b2b-4f2f-09a83d5dc22a@suse.com (mailing list archive)
State New, archived
Series x86/mm: large parts of P2M code and struct p2m_domain are HVM-only

Commit Message

Jan Beulich Feb. 23, 2022, 3:59 p.m. UTC
..., moving the former into the new physmap.c. Also call the new
functions directly from arch_iommu_hwdom_init() and
vpci_make_msix_hole(), as the PV/HVM split is explicit there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@ctirix.com>
---
v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().

Comments

George Dunlap April 1, 2022, 12:13 p.m. UTC | #1
> On Feb 23, 2022, at 3:59 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

“citrix” is spelled wrong in the email address — not sure whether it’s my typo or yours. :-)

> ---
> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().

I think the R-b should probably have been dropped for this change, but in any case:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Jan Beulich April 6, 2022, 2:52 p.m. UTC | #2
On 23.02.2022 16:59, Jan Beulich wrote:
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

May I ask for an ack on the vPCI change here?

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -409,7 +409,7 @@ int vpci_make_msix_hole(const struct pci
>              case p2m_mmio_direct:
>                  if ( mfn_x(mfn) == start )
>                  {
> -                    clear_identity_p2m_entry(d, start);
> +                    p2m_remove_identity_entry(d, start);
>                      break;
>                  }
>                  /* fallthrough. */

Thanks, Jan
Roger Pau Monné April 8, 2022, 10:55 a.m. UTC | #3
On Wed, Feb 23, 2022 at 04:59:42PM +0100, Jan Beulich wrote:
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one comment below, which can also be taken care of in a followup
patch if you agree.

> ---
> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1473,12 +1473,9 @@ static int clear_mmio_p2m_entry(struct d
>      return rc;
>  }
>  
> -#endif /* CONFIG_HVM */
> -
> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> +int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,

I guess switching the gfn_l parameter to be gfn_t gfn, and then
defining:

unsigned long gfn_l = gfn_x(gfn);

Was too much churn?

(this is just so that the exposed interface for p2m_add_identity_entry
is cleaner).

Thanks, Roger.
Jan Beulich April 8, 2022, 12:14 p.m. UTC | #4
On 08.04.2022 12:55, Roger Pau Monné wrote:
> On Wed, Feb 23, 2022 at 04:59:42PM +0100, Jan Beulich wrote:
>> ..., moving the former into the new physmap.c. Also call the new
>> functions directly from arch_iommu_hwdom_init() and
>> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Just one comment below, which can also be taken care of in a followup
> patch if you agree.
> 
>> ---
>> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().
>>
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1473,12 +1473,9 @@ static int clear_mmio_p2m_entry(struct d
>>      return rc;
>>  }
>>  
>> -#endif /* CONFIG_HVM */
>> -
>> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
>> +int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,
> 
> I guess switching the gfn_l parameter to be gfn_t gfn, and then
> defining:
> 
> unsigned long gfn_l = gfn_x(gfn);
> 
> Was too much churn?

Well, yes, I probably could have done that, but the series was (going
to be) big enough already, so I tried to stay away from such (for
consistency's sake I think I would then have needed to do the same
elsewhere as well).

Jan

Patch

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1473,12 +1473,9 @@  static int clear_mmio_p2m_entry(struct d
     return rc;
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1488,13 +1485,8 @@  int set_identity_p2m_entry(struct domain
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                p2m_access_to_iommu_flags(p2ma));
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1520,12 +1512,10 @@  int set_identity_p2m_entry(struct domain
 
     gfn_unlock(p2m, gfn, 0);
     return ret;
-#endif
 }
 
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn_l)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1535,11 +1525,8 @@  int clear_identity_p2m_entry(struct doma
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1561,7 +1548,6 @@  int clear_identity_p2m_entry(struct doma
     }
 
     return ret;
-#endif
 }
 
 #ifdef CONFIG_MEM_SHARING
@@ -1606,8 +1592,6 @@  int set_shared_p2m_entry(struct domain *
 
 #endif /* CONFIG_MEM_SHARING */
 
-#ifdef CONFIG_HVM
-
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {
--- a/xen/arch/x86/mm/physmap.c
+++ b/xen/arch/x86/mm/physmap.c
@@ -21,6 +21,7 @@ 
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/iommu.h>
 #include <asm/p2m.h>
 
 #include "mm-locks.h"
@@ -75,6 +76,33 @@  guest_physmap_remove_page(struct domain
     return p2m_remove_page(d, gfn, mfn, page_order);
 }
 
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn), _mfn(gfn),
+                                1ul << PAGE_ORDER_4K,
+                                p2m_access_to_iommu_flags(p2ma));
+    }
+
+    return p2m_add_identity_entry(d, gfn, p2ma, flag);
+}
+
+int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_unmap(d, _dfn(gfn), 1ul << PAGE_ORDER_4K);
+    }
+
+    return p2m_remove_identity_entry(d, gfn);
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -373,7 +373,7 @@  void __hwdom_init arch_iommu_hwdom_init(
         if ( !hwdom_iommu_map(d, pfn, max_pfn) )
             rc = 0;
         else if ( paging_mode_translate(d) )
-            rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
+            rc = p2m_add_identity_entry(d, pfn, p2m_access_rw, 0);
         else
             rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
                            IOMMUF_readable | IOMMUF_writable, &flush_flags);
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -409,7 +409,7 @@  int vpci_make_msix_hole(const struct pci
             case p2m_mmio_direct:
                 if ( mfn_x(mfn) == start )
                 {
-                    clear_identity_p2m_entry(d, start);
+                    p2m_remove_identity_entry(d, start);
                     break;
                 }
                 /* fallthrough. */
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -637,6 +637,10 @@  int set_mmio_p2m_entry(struct domain *d,
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
                            p2m_access_t p2ma, unsigned int flag);
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn);
+/* HVM-only callers can use these directly: */
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag);
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
 
 /* 
  * Populate-on-demand