
x86/shadow: fold p2m page accounting into sh_min_allocation()

Message ID 9deab964-2685-3c04-9e4c-e3df04885742@suse.com

Commit Message

Jan Beulich Sept. 5, 2019, 8:34 a.m. UTC
This is to make the function live up to the promise its name makes. And
it simplifies all callers.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
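
For illustration, the net effect on callers, distilled from the hunks
below: where a caller previously had to add the p2m page count on top
of the reported minimum, e.g.

    if ( pages < sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
        /* grow the allocation */;

it can now test against the folded minimum alone:

    if ( pages < sh_min_allocation(d) )
        /* grow the allocation */;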

Comments

Roger Pau Monne Sept. 6, 2019, 11:27 a.m. UTC | #1
On Thu, Sep 05, 2019 at 10:34:47AM +0200, Jan Beulich wrote:
> This is to make the function live up to the promise its name makes. And
> it simplifies all callers.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.
Andrew Cooper Sept. 6, 2019, 12:14 p.m. UTC | #2
On 05/09/2019 09:34, Jan Beulich wrote:
> This is to make the function live up to the promise its name makes. And
> it simplifies all callers.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I haven't looked at the calculations in detail, but from the point of
view of the resulting code, this is much better.
Tim Deegan Sept. 11, 2019, 7:14 a.m. UTC | #3
At 10:34 +0200 on 05 Sep (1567679687), Jan Beulich wrote:
> This is to make the function live up to the promise its name makes. And
> it simplifies all callers.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

Patch

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1256,29 +1256,26 @@  static unsigned int sh_min_allocation(co
      * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
      */
     return shadow_min_acceptable_pages(d) +
-           max(d->tot_pages / 256,
-               is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
-           is_hvm_domain(d);
+           max(max(d->tot_pages / 256,
+                   is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
+               is_hvm_domain(d),
+               d->arch.paging.shadow.p2m_pages);
 }
 
 int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
 {
     struct page_info *sp;
-    unsigned int lower_bound;
 
     ASSERT(paging_locked_by_me(d));
 
     if ( pages > 0 )
     {
         /* Check for minimum value. */
-        if ( pages < d->arch.paging.shadow.p2m_pages )
-            pages = 0;
-        else
-            pages -= d->arch.paging.shadow.p2m_pages;
+        unsigned int lower_bound = sh_min_allocation(d);
 
-        lower_bound = sh_min_allocation(d);
         if ( pages < lower_bound )
             pages = lower_bound;
+        pages -= d->arch.paging.shadow.p2m_pages;
     }
 
     SHADOW_PRINTK("current %i target %i\n",
@@ -2607,7 +2604,7 @@  int shadow_enable(struct domain *d, u32
 
     /* Init the shadow memory allocation if the user hasn't done so */
     old_pages = d->arch.paging.shadow.total_pages;
-    if ( old_pages < sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
+    if ( old_pages < sh_min_allocation(d) )
     {
         paging_lock(d);
         rv = shadow_set_allocation(d, 1024, NULL); /* Use at least 4MB */
@@ -2864,8 +2861,7 @@  static int shadow_one_bit_enable(struct
 
     mode |= PG_SH_enable;
 
-    if ( d->arch.paging.shadow.total_pages <
-         sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
+    if ( d->arch.paging.shadow.total_pages < sh_min_allocation(d) )
     {
         /* Init the shadow memory allocation if the user hasn't done so */
         if ( shadow_set_allocation(d, 1, NULL) != 0 )
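
A worked example of the new minimum (illustrative numbers only): for an
HVM domain with CONFIG_PAGING_LEVELS == 4, d->tot_pages == 2048 and
d->arch.paging.shadow.p2m_pages == 10, the inner max() yields
max(2048 / 256, 4 + 2) == 8; adding 1 for is_hvm_domain(d) gives 9; and
the outer max(9, 10) selects the p2m page count, i.e. 10 pages on top
of shadow_min_acceptable_pages(d).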