From patchwork Mon Jun 16 05:40:50 2014
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 4356401
From: Joonsoo Kim
To: Andrew Morton, "Aneesh Kumar K.V", Marek Szyprowski, Michal Nazarewicz
Cc: Russell King - ARM Linux, kvm@vger.kernel.org, linux-mm@kvack.org, Gleb Natapov, Greg Kroah-Hartman, Alexander Graf, kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Minchan Kim, Paul Mackerras, Benjamin Herrenschmidt, Paolo Bonzini, Joonsoo Kim, Zhang Yanfei, linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 -next 8/9] mm, CMA: change cma_declare_contiguous() to obey coding convention
Date: Mon, 16 Jun 2014 14:40:50 +0900
Message-Id: <1402897251-23639-9-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1402897251-23639-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1402897251-23639-1-git-send-email-iamjoonsoo.kim@lge.com>

Conventionally, an output parameter goes at the end of the parameter list,
and 'base' comes before 'size'. cma_declare_contiguous() doesn't follow
these conventions, so change it accordingly. Additionally, move the
cma_areas reference down to the point where it is really needed.
v3: put 'base' ahead of 'size' (Minchan)

Acked-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 3960e0b..6cf498a 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -185,8 +185,8 @@ void __init kvm_cma_reserve(void)
 		align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		cma_declare_contiguous(selected_size, 0, 0, align_size,
-			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, &kvm_cma, false);
+		cma_declare_contiguous(0, selected_size, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
 
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 0411c1c..6606abd 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -165,7 +165,7 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 {
 	int ret;
 
-	ret = cma_declare_contiguous(size, base, limit, 0, 0, res_cma, fixed);
+	ret = cma_declare_contiguous(base, size, limit, 0, 0, fixed, res_cma);
 	if (ret)
 		return ret;
 
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 69d3726..32cab7a 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -15,7 +15,7 @@ extern unsigned long cma_get_size(struct cma *cma);
 extern int __init cma_declare_contiguous(phys_addr_t size,
 			phys_addr_t base, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
-			struct cma **res_cma, bool fixed);
+			bool fixed, struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index b442a13..9961120 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -141,13 +141,13 @@ core_initcall(cma_init_reserved_areas);
 
 /**
  * cma_declare_contiguous() - reserve custom contiguous area
- * @size: Size of the reserved area (in bytes),
  * @base: Base address of the reserved area optional, use 0 for any
+ * @size: Size of the reserved area (in bytes),
  * @limit: End address of the reserved memory (optional, 0 for any).
  * @alignment: Alignment for the CMA area, should be power of 2 or zero
  * @order_per_bit: Order of pages represented by one bit on bitmap.
- * @res_cma: Pointer to store the created cma region.
  * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
  *
  * This function reserves memory from early allocator. It should be
  * called by arch specific code once the early allocator (memblock or bootmem)
@@ -157,12 +157,12 @@ core_initcall(cma_init_reserved_areas);
  * If @fixed is true, reserve contiguous area at exactly @base. If false,
  * reserve in range from @base to @limit.
  */
-int __init cma_declare_contiguous(phys_addr_t size,
-			phys_addr_t base, phys_addr_t limit,
+int __init cma_declare_contiguous(phys_addr_t base,
+			phys_addr_t size, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
-			struct cma **res_cma, bool fixed)
+			bool fixed, struct cma **res_cma)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
+	struct cma *cma;
 	int ret = 0;
 
 	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -218,6 +218,7 @@ int __init cma_declare_contiguous(phys_addr_t size,
 	 * Each reserved area must be initialised later, when more kernel
 	 * subsystems (like slab allocator) are available.
 	 */
+	cma = &cma_areas[cma_area_count];
 	cma->base_pfn = PFN_DOWN(base);
 	cma->count = size >> PAGE_SHIFT;
 	cma->order_per_bit = order_per_bit;
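
For readers following the series, here is a minimal sketch of what a caller
looks like against the reordered API ('base' before 'size', output pointer
last). The names example_cma and example_cma_reserve, the 64 MiB size, and
the init hook are illustrative assumptions only, not part of this patch; a
real caller must run from early arch setup code once memblock is available.

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/sizes.h>

static struct cma *example_cma;		/* hypothetical output region */

static int __init example_cma_reserve(void)
{
	/*
	 * New argument order: base, size, limit, alignment,
	 * order_per_bit, fixed, res_cma.  The equivalent call with the
	 * old order was
	 * cma_declare_contiguous(SZ_64M, 0, 0, 0, 0, &example_cma, false).
	 */
	return cma_declare_contiguous(0, SZ_64M, 0, 0, 0, false,
				      &example_cma);
}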