
[v2,07/10] PPC, KVM, CMA: use general CMA reserved area management framework

Message ID 87mwdfg9co.fsf@linux.vnet.ibm.com (mailing list archive)
State New, archived

Commit Message

Aneesh Kumar K.V June 14, 2014, 8:53 a.m. UTC
Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Now, we have a general CMA reserved area management framework,
> so use it for future maintainability. There is no functional change.
>
> Acked-by: Michal Nazarewicz <mina86@mina86.com>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Need this. We may want to keep the VM_BUG_ON by moving
KVM_CMA_CHUNK_ORDER around.

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8056107..1932e0e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -37,8 +37,6 @@
 #include <asm/ppc-opcode.h>
 #include <asm/cputable.h>
 
-#include "book3s_hv_cma.h"
-
 /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
 #define MAX_LPID_970   63
 
@@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
        }
 
        kvm->arch.hpt_cma_alloc = 0;
-       VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
        page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
        if (page) {
                hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));

-aneesh

Comments

Joonsoo Kim June 16, 2014, 5:34 a.m. UTC | #1
On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
> 
> > Now, we have a general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Acked-by: Michal Nazarewicz <mina86@mina86.com>
> > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Need this. We may want to keep the VM_BUG_ON by moving
> KVM_CMA_CHUNK_ORDER around.
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 8056107..1932e0e 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -37,8 +37,6 @@
>  #include <asm/ppc-opcode.h>
>  #include <asm/cputable.h>
>  
> -#include "book3s_hv_cma.h"
> -
>  /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>  #define MAX_LPID_970   63
>  
> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>         }
>  
>         kvm->arch.hpt_cma_alloc = 0;
> -       VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
>         page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
>         if (page) {
>                 hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
> 
> 
> 
> -aneesh

Okay.
So do you also want this?

@@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 {
        unsigned long align_pages = HPT_ALIGN_PAGES;
 
+       VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
+
        /* Old CPUs require HPT aligned on a multiple of its size */
        if (!cpu_has_feature(CPU_FTR_ARCH_206))
                align_pages = nr_pages;
-       return kvm_alloc_cma(nr_pages, align_pages);
+       return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }

Thanks.
Aneesh Kumar K.V June 16, 2014, 7:02 a.m. UTC | #2
Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
>> 
>> > Now, we have a general CMA reserved area management framework,
>> > so use it for future maintainability. There is no functional change.
>> >
>> > Acked-by: Michal Nazarewicz <mina86@mina86.com>
>> > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> 
>> Need this. We may want to keep the VM_BUG_ON by moving
>> KVM_CMA_CHUNK_ORDER around.
>> 
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 8056107..1932e0e 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -37,8 +37,6 @@
>>  #include <asm/ppc-opcode.h>
>>  #include <asm/cputable.h>
>>  
>> -#include "book3s_hv_cma.h"
>> -
>>  /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>>  #define MAX_LPID_970   63
>>  
>> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>>         }
>>  
>>         kvm->arch.hpt_cma_alloc = 0;
>> -       VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
>>         page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
>>         if (page) {
>>                 hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
>> 
>> 
>> 
>> -aneesh
>
> Okay.
> So do you also want this?
>
> @@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
>  {
>         unsigned long align_pages = HPT_ALIGN_PAGES;
>
> +       VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
> +
>         /* Old CPUs require HPT aligned on a multiple of its size */
>         if (!cpu_has_feature(CPU_FTR_ARCH_206))
>                 align_pages = nr_pages;
> -       return kvm_alloc_cma(nr_pages, align_pages);
> +       return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
>  }

That would also work.

Thanks
-aneesh

Patch

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8056107..1932e0e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -37,8 +37,6 @@
 #include <asm/ppc-opcode.h>
 #include <asm/cputable.h>
 
-#include "book3s_hv_cma.h"
-
 /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
 #define MAX_LPID_970   63
 
@@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
        }
 
        kvm->arch.hpt_cma_alloc = 0;
-       VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
        page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
        if (page) {
                hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));