From patchwork Wed Nov 5 20:07:55 2014
X-Patchwork-Submitter: Gregory Fong
X-Patchwork-Id: 5237821
From: Gregory Fong
To: linux-mm@kvack.org
Cc: f.fainelli@gmail.com, Laura Abbott, open list, Michal Nazarewicz,
    Laurent Pinchart, "Aneesh Kumar K.V", Gregory Fong, Andrew Morton,
    Joonsoo Kim, linux-arm-kernel@lists.infradead.org, Marek Szyprowski
Subject: [PATCH 2/2] mm: cma: Align to physical address, not CMA region position
Date: Wed, 5 Nov 2014 12:07:55 -0800
Message-Id: <1415218078-10078-2-git-send-email-gregory.0xf0@gmail.com>
In-Reply-To: <1415218078-10078-1-git-send-email-gregory.0xf0@gmail.com>
References: <1415218078-10078-1-git-send-email-gregory.0xf0@gmail.com>

The alignment in cma_alloc() was done w.r.t. the bitmap.  This is a
problem when, for example:

- a device requires 16M (order 12) alignment
- the CMA region is not 16M aligned

In such a case the CMA region can start at, say, 0x2f800000, but any
allocation you make from there will be aligned relative to that start
address rather than to a physical boundary.  Requesting an allocation of
32M with 16M alignment will result in an allocation from 0x2f800000 to
0x31800000, which doesn't work very well if your strange device requires
16M alignment.

Change to use bitmap_find_next_zero_area_off() to account for the
difference in alignment at reserve-time and alloc-time.
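To make the arithmetic concrete, here is a minimal userspace sketch of the
offset the new helper computes for the 0x2f800000 example above.  It is an
illustration only, not kernel code; the 4K page size, order_per_bit = 0 and
the local ALIGN() macro are assumptions made for the example.

#include <stdio.h>

/* power-of-two round-up, mirroring the kernel's ALIGN() for this sketch */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long base_pfn = 0x2f800000UL >> 12;	/* region base from the example, 4K pages */
	unsigned int order_per_bit = 0;			/* assume one bitmap bit per page */
	unsigned int align_order = 12;			/* 16M alignment = order 12 in 4K pages */

	unsigned long alignment = 1UL << (align_order - order_per_bit);
	unsigned long offset = ALIGN(base_pfn, alignment) -
			       (base_pfn >> order_per_bit);

	/* offset = 0x800 bits: the 8M gap up to 0x30000000, the first
	 * 16M-aligned physical address inside the region */
	printf("bitmap offset = %#lx, first aligned pfn = %#lx\n",
	       offset, base_pfn + offset);
	return 0;
}

With the old bitmap-relative alignment the search would instead have
returned bit 0, i.e. physical 0x2f800000, which is exactly the failure the
commit message describes.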
Cc: Michal Nazarewicz
Signed-off-by: Gregory Fong
Acked-by: Michal Nazarewicz
---
 mm/cma.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index fde706e..0813599 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -63,6 +63,17 @@ static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 	return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
+static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order)
+{
+	unsigned int alignment;
+
+	if (align_order <= cma->order_per_bit)
+		return 0;
+	alignment = 1UL << (align_order - cma->order_per_bit);
+	return ALIGN(cma->base_pfn, alignment) -
+		(cma->base_pfn >> cma->order_per_bit);
+}
+
 static unsigned long cma_bitmap_maxno(struct cma *cma)
 {
 	return cma->count >> cma->order_per_bit;
@@ -328,7 +339,7 @@ err:
  */
 struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 {
-	unsigned long mask, pfn, start = 0;
+	unsigned long mask, offset, pfn, start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
 	struct page *page = NULL;
 	int ret;
@@ -343,13 +354,15 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 		return NULL;
 
 	mask = cma_bitmap_aligned_mask(cma, align);
+	offset = cma_bitmap_aligned_offset(cma, align);
 	bitmap_maxno = cma_bitmap_maxno(cma);
 	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
 
 	for (;;) {
 		mutex_lock(&cma->lock);
-		bitmap_no = bitmap_find_next_zero_area(cma->bitmap,
-				bitmap_maxno, start, bitmap_count, mask);
+		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
+				bitmap_maxno, start, bitmap_count, mask,
+				offset);
 		if (bitmap_no >= bitmap_maxno) {
 			mutex_unlock(&cma->lock);
 			break;
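For readers without the companion patch at hand, the sketch below is a
minimal userspace model of what an offset-aware zero-area search does with
the extra argument.  It is not the lib/bitmap.c implementation, and the
helper names (find_zero_area_off, bit_is_set) are invented for the example;
the point is only that the candidate index is rounded so that
(index + offset) is a multiple of the alignment, i.e. alignment is taken
relative to the region's physical base instead of bit 0 of the bitmap.

#include <stdbool.h>
#include <stdio.h>

static bool bit_is_set(const unsigned char *map, unsigned long bit)
{
	return map[bit / 8] & (1u << (bit % 8));
}

/* Search 'map' (size bits) for a run of 'nr' zero bits at or after 'start'
 * whose index satisfies (index + offset) & mask == 0. */
static unsigned long find_zero_area_off(const unsigned char *map,
					unsigned long size, unsigned long start,
					unsigned long nr, unsigned long mask,
					unsigned long offset)
{
	unsigned long index = start;

	while (index + nr <= size) {
		/* align the candidate w.r.t. the physical base, not the bitmap */
		index = ((index + offset + mask) & ~mask) - offset;
		if (index + nr > size)
			break;

		unsigned long busy = index;
		while (busy < index + nr && !bit_is_set(map, busy))
			busy++;
		if (busy == index + nr)
			return index;		/* found a big-enough zero run */
		index = busy + 1;		/* skip past the set bit and retry */
	}
	return size;				/* no fit */
}

int main(void)
{
	unsigned char map[512 / 8] = { 0 };	/* 512 free pages */

	/* 16-page alignment (mask 0xf) with the region base sitting 8 pages
	 * below an aligned boundary (offset 8): the first fit lands at bit 8,
	 * so the corresponding physical address is 16-page aligned. */
	printf("index = %lu\n", find_zero_area_off(map, 512, 0, 32, 0xf, 8));
	return 0;
}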