From patchwork Fri Oct 24 10:18:41 2014
X-Patchwork-Submitter: Laurent Pinchart
X-Patchwork-Id: 5145431
From: Laurent Pinchart
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-sh@vger.kernel.org, Marek Szyprowski, Russell King - ARM Linux,
 Michal Nazarewicz, Joonsoo Kim, Weijie Yang
Subject: [PATCH v2 3/4] mm: cma: Ensure that reservations never cross the
 low/high mem boundary
Date: Fri, 24 Oct 2014 13:18:41 +0300
Message-Id: <1414145922-26042-4-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
X-Mailer: git-send-email 2.0.4
In-Reply-To: <1414145922-26042-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
References: <1414145922-26042-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>

Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to low
memory") extended CMA memory reservation to allow usage of high memory. It
relied on commit f7426b983a6a ("mm: cma: adjust address limit to avoid
hitting low/high memory boundary") to ensure that the reserved block never
crossed the low/high memory boundary.

While the implementation correctly lowered the limit, it failed to consider
the case where the base..limit range crossed the low/high memory boundary
with enough space on each side to reserve the requested size in either low
or high memory alone. Rework the base and limit adjustment to fix the
problem. The function now rejects fixed reservations that cross the boundary
outright, tries to reserve from high memory first, and falls back to low
memory if that fails.
Signed-off-by: Laurent Pinchart
Acked-by: Michal Nazarewicz
---
 mm/cma.c | 49 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 16 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 62a5dccc3fb8..c30a6edee65c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -253,23 +253,24 @@ int __init cma_declare_contiguous(phys_addr_t base,
 		return -EINVAL;
 
 	/*
-	 * adjust limit to avoid crossing low/high memory boundary for
-	 * automatically allocated regions
+	 * If allocating at a fixed base the request region must not cross the
+	 * low/high memory boundary.
 	 */
-	if (((limit == 0 || limit > memblock_end) &&
-	     (memblock_end - size < highmem_start &&
-	      memblock_end > highmem_start)) ||
-	    (!fixed && limit > highmem_start && limit - size < highmem_start)) {
-		limit = highmem_start;
-	}
-
-	if (fixed && base < highmem_start && base+size > highmem_start) {
+	if (fixed && base < highmem_start && base + size > highmem_start) {
 		ret = -EINVAL;
 		pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
 			(unsigned long)base, (unsigned long)highmem_start);
 		goto err;
 	}
 
+	/*
+	 * If the limit is unspecified or above the memblock end, its effective
+	 * value will be the memblock end. Set it explicitly to simplify further
+	 * checks.
+	 */
+	if (limit == 0 || limit > memblock_end)
+		limit = memblock_end;
+
 	/* Reserve memory */
 	if (fixed) {
 		if (memblock_is_region_reserved(base, size) ||
@@ -278,14 +279,30 @@ int __init cma_declare_contiguous(phys_addr_t base,
 			goto err;
 		}
 	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
+		phys_addr_t addr = 0;
+
+		/*
+		 * All pages in the reserved area must come from the same zone.
+		 * If the requested region crosses the low/high memory boundary,
+		 * try allocating from high memory first and fall back to low
+		 * memory in case of failure.
+		 */
+		if (base < highmem_start && limit > highmem_start) {
+			addr = memblock_alloc_range(size, alignment,
+						    highmem_start, limit);
+			limit = highmem_start;
+		}
+
 		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
+			addr = memblock_alloc_range(size, alignment, base,
+						    limit);
+			if (!addr) {
+				ret = -ENOMEM;
+				goto err;
+			}
 		}
+
+		base = addr;
 	}
 
 	ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
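
For readers who want to try the reservation strategy outside the kernel, the
standalone sketch below mimics the decision flow of the patched
cma_declare_contiguous(): reject fixed reservations that straddle the
boundary, clamp the limit, try high memory first, then fall back to low
memory. Everything in it is assumed for illustration: the HIGHMEM_START and
MEMBLOCK_END values, the hypothetical alloc_range() stand-in for
memblock_alloc_range(), and the simplified reserve() helper. It is a sketch
of the approach, not kernel code.

/* cma_reserve_sketch.c - illustration only, not kernel code */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned long long phys_addr_t;

#define HIGHMEM_START	0x30000000ULL	/* assumed low/high memory boundary */
#define MEMBLOCK_END	0x80000000ULL	/* assumed end of memory */

/*
 * Hypothetical stand-in for memblock_alloc_range(): returns a base address
 * on success, 0 on failure. A flag lets the caller simulate an exhausted
 * high-memory zone to exercise the low-memory fallback.
 */
static bool highmem_exhausted;

static phys_addr_t alloc_range(phys_addr_t size, phys_addr_t base,
			       phys_addr_t limit)
{
	if (limit - base < size)
		return 0;
	if (base >= HIGHMEM_START && highmem_exhausted)
		return 0;
	return base;	/* pretend the allocation starts at 'base' */
}

/* Mirrors the decision flow of the patched cma_declare_contiguous(). */
static int reserve(phys_addr_t base, phys_addr_t size, phys_addr_t limit,
		   bool fixed)
{
	phys_addr_t addr = 0;

	/* A fixed reservation must not straddle the low/high boundary. */
	if (fixed && base < HIGHMEM_START && base + size > HIGHMEM_START)
		return -1;

	/* Clamp an unspecified or too-large limit to the memblock end. */
	if (limit == 0 || limit > MEMBLOCK_END)
		limit = MEMBLOCK_END;

	if (fixed) {
		addr = base;	/* the kernel would check memblock reservations here */
	} else {
		/* Try high memory first if the window crosses the boundary. */
		if (base < HIGHMEM_START && limit > HIGHMEM_START) {
			addr = alloc_range(size, HIGHMEM_START, limit);
			limit = HIGHMEM_START;
		}
		/* Fall back to low memory. */
		if (!addr)
			addr = alloc_range(size, base, limit);
		if (!addr)
			return -1;
	}

	printf("reserved 0x%llx bytes at 0x%llx (%s memory)\n",
	       size, addr, addr >= HIGHMEM_START ? "high" : "low");
	return 0;
}

int main(void)
{
	/* Window crosses the boundary: high memory is preferred... */
	reserve(0x10000000ULL, 0x04000000ULL, 0, false);

	/* ...and low memory is used once high memory is exhausted. */
	highmem_exhausted = true;
	reserve(0x10000000ULL, 0x04000000ULL, 0, false);

	/* A fixed reservation straddling the boundary is rejected. */
	if (reserve(HIGHMEM_START - 0x1000, 0x2000, 0, true))
		printf("fixed reservation across the boundary rejected\n");

	return 0;
}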