From patchwork Wed Jan 9 11:23:13 2013
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 1952121
Date: Wed, 9 Jan 2013 03:23:13 -0800
From: Michel Lespinasse
To: Benjamin Herrenschmidt
Cc: Rik van Riel, "James E.J. Bottomley", Matt Turner, David Howells,
	Tony Luck, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, linuxppc-dev@lists.ozlabs.org,
	linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-ia64@vger.kernel.org
Subject: Re: [PATCH 7/8] mm: use vm_unmapped_area() on powerpc architecture
Message-ID: <20130109112313.GA4905@google.com>
References: <1357694895-520-1-git-send-email-walken@google.com>
	<1357694895-520-8-git-send-email-walken@google.com>
	<1357697739.4838.30.camel@pasglop>
	<1357702376.4838.32.camel@pasglop>
In-Reply-To: <1357702376.4838.32.camel@pasglop>
X-Mailing-List: linux-parisc@vger.kernel.org

On Wed, Jan 09, 2013 at 02:32:56PM +1100, Benjamin Herrenschmidt wrote:
> Ok.
> I think at least you can move that construct:
>
> +	if (addr < SLICE_LOW_TOP) {
> +		slice = GET_LOW_SLICE_INDEX(addr);
> +		addr = (slice + 1) << SLICE_LOW_SHIFT;
> +		if (!(available.low_slices & (1u << slice)))
> +			continue;
> +	} else {
> +		slice = GET_HIGH_SLICE_INDEX(addr);
> +		addr = (slice + 1) << SLICE_HIGH_SHIFT;
> +		if (!(available.high_slices & (1u << slice)))
> +			continue;
> +	}
>
> Into some kind of helper. It will probably compile to the same thing but
> at least it's more readable and it will avoid a fuckup in the future if
> somebody changes the algorithm and forgets to update one of the
> copies :-)

All right, does the following look more palatable then?
(didn't re-test it, though)

Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
 arch/powerpc/mm/slice.c | 123 ++++++++++++++++++++++++++++++-----------------
 1 files changed, 78 insertions(+), 45 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 999a74f25ebe..3e99c149271a 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -237,36 +237,69 @@ static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psiz
 #endif
 }
 
+/*
+ * Compute which slice addr is part of;
+ * set *boundary_addr to the start or end boundary of that slice
+ * (depending on 'end' parameter);
+ * return boolean indicating if the slice is marked as available in the
+ * 'available' slice_mask.
+ */
+static bool slice_scan_available(unsigned long addr,
+				 struct slice_mask available,
+				 int end,
+				 unsigned long *boundary_addr)
+{
+	unsigned long slice;
+	if (addr < SLICE_LOW_TOP) {
+		slice = GET_LOW_SLICE_INDEX(addr);
+		*boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
+		return !!(available.low_slices & (1u << slice));
+	} else {
+		slice = GET_HIGH_SLICE_INDEX(addr);
+		*boundary_addr = (slice + end) ?
+			((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP;
+		return !!(available.high_slices & (1u << slice));
+	}
+}
+
 static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
 					      unsigned long len,
 					      struct slice_mask available,
 					      int psize)
 {
-	struct vm_area_struct *vma;
-	unsigned long addr;
-	struct slice_mask mask;
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+	unsigned long addr, found, next_end;
+	struct vm_unmapped_area_info info;
 
-	addr = TASK_UNMAPPED_BASE;
-
-	for (;;) {
-		addr = _ALIGN_UP(addr, 1ul << pshift);
-		if ((TASK_SIZE - len) < addr)
-			break;
-		vma = find_vma(mm, addr);
-		BUG_ON(vma && (addr >= vma->vm_end));
+	info.flags = 0;
+	info.length = len;
+	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+	info.align_offset = 0;
 
-		mask = slice_range_to_mask(addr, len);
-		if (!slice_check_fit(mask, available)) {
-			if (addr < SLICE_LOW_TOP)
-				addr = _ALIGN_UP(addr + 1, 1ul << SLICE_LOW_SHIFT);
-			else
-				addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
+	addr = TASK_UNMAPPED_BASE;
+	while (addr < TASK_SIZE) {
+		info.low_limit = addr;
+		if (!slice_scan_available(addr, available, 1, &addr))
 			continue;
+
+ next_slice:
+		/*
+		 * At this point [info.low_limit; addr) covers
+		 * available slices only and ends at a slice boundary.
+		 * Check if we need to reduce the range, or if we can
+		 * extend it to cover the next available slice.
+		 */
+		if (addr >= TASK_SIZE)
+			addr = TASK_SIZE;
+		else if (slice_scan_available(addr, available, 1, &next_end)) {
+			addr = next_end;
+			goto next_slice;
 		}
-		if (!vma || addr + len <= vma->vm_start)
-			return addr;
-		addr = vma->vm_end;
+		info.high_limit = addr;
+
+		found = vm_unmapped_area(&info);
+		if (!(found & ~PAGE_MASK))
+			return found;
 	}
 
 	return -ENOMEM;
@@ -277,39 +310,39 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 					     struct slice_mask available,
 					     int psize)
 {
-	struct vm_area_struct *vma;
-	unsigned long addr;
-	struct slice_mask mask;
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+	unsigned long addr, found, prev;
+	struct vm_unmapped_area_info info;
 
-	addr = mm->mmap_base;
-	while (addr > len) {
-		/* Go down by chunk size */
-		addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+	info.length = len;
+	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
+	info.align_offset = 0;
 
-		/* Check for hit with different page size */
-		mask = slice_range_to_mask(addr, len);
-		if (!slice_check_fit(mask, available)) {
-			if (addr < SLICE_LOW_TOP)
-				addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
-			else if (addr < (1ul << SLICE_HIGH_SHIFT))
-				addr = SLICE_LOW_TOP;
-			else
-				addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
+	addr = mm->mmap_base;
+	while (addr > PAGE_SIZE) {
+		info.high_limit = addr;
+		if (!slice_scan_available(addr - 1, available, 0, &addr))
 			continue;
-		}
 
+ prev_slice:
 		/*
-		 * Lookup failure means no vma is above this address,
-		 * else if new region fits below vma->vm_start,
-		 * return with success:
+		 * At this point [addr; info.high_limit) covers
+		 * available slices only and starts at a slice boundary.
+		 * Check if we need to reduce the range, or if we can
+		 * extend it to cover the previous available slice.
 		 */
-		vma = find_vma(mm, addr);
-		if (!vma || (addr + len) <= vma->vm_start)
-			return addr;
+		if (addr < PAGE_SIZE)
+			addr = PAGE_SIZE;
+		else if (slice_scan_available(addr - 1, available, 0, &prev)) {
+			addr = prev;
+			goto prev_slice;
+		}
+		info.low_limit = addr;
 
-		/* try just below the current vma->vm_start */
-		addr = vma->vm_start;
+		found = vm_unmapped_area(&info);
+		if (!(found & ~PAGE_MASK))
+			return found;
 	}
 
 	/*