From patchwork Fri Jul 22 12:19:28 2016
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9243425
From: Jan Kara
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org,
	Dan Williams, Ross Zwisler, Jan Kara
Subject: [PATCH 02/15] mm: Propagate original vm_fault into do_fault_around()
Date: Fri, 22 Jul 2016 14:19:28 +0200
Message-Id: <1469189981-19000-3-git-send-email-jack@suse.cz>
X-Mailer: git-send-email 2.6.6
In-Reply-To: <1469189981-19000-1-git-send-email-jack@suse.cz>
References: <1469189981-19000-1-git-send-email-jack@suse.cz>
List-ID: 
X-Mailing-List: linux-fsdevel@vger.kernel.org

Propagate the vm_fault structure of the original fault into
do_fault_around(). For now this saves only two arguments of
do_fault_around(), but as more fields are added to struct vm_fault
the win will grow.

Signed-off-by: Jan Kara
---
 mm/memory.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4ee0aa96d78d..651accbe34cc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2950,13 +2950,14 @@ late_initcall(fault_around_debugfs);
  * fault_around_pages() value (and therefore to page order). This way it's
  * easier to guarantee that we don't cross page table boundaries.
  */
-static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
-		pte_t *pte, pgoff_t pgoff, unsigned int flags)
+static void do_fault_around(struct vm_area_struct *vma, struct vm_fault *vmf,
+		pte_t *pte)
 {
 	unsigned long start_addr, nr_pages, mask;
-	pgoff_t max_pgoff;
-	struct vm_fault vmf;
+	pgoff_t pgoff = vmf->pgoff, max_pgoff;
+	struct vm_fault vmfaround;
 	int off;
+	unsigned long address = (unsigned long)vmf->virtual_address;
 
 	nr_pages = READ_ONCE(fault_around_bytes) >> PAGE_SHIFT;
 	mask = ~(nr_pages * PAGE_SIZE - 1) & PAGE_MASK;
@@ -2985,10 +2986,10 @@ static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
 		pte++;
 	}
 
-	init_vmf(&vmf, vma, start_addr, pgoff, flags);
-	vmf.pte = pte;
-	vmf.max_pgoff = max_pgoff;
-	vma->vm_ops->map_pages(vma, &vmf);
+	init_vmf(&vmfaround, vma, start_addr, pgoff, vmf->flags);
+	vmfaround.pte = pte;
+	vmfaround.max_pgoff = max_pgoff;
+	vma->vm_ops->map_pages(vma, &vmfaround);
 }
 
 static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -3006,7 +3007,7 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 */
 	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
 		pte = pte_offset_map_lock(mm, pmd, address, &ptl);
-		do_fault_around(vma, address, pte, vmf->pgoff, vmf->flags);
+		do_fault_around(vma, vmf, pte);
 		if (!pte_same(*pte, orig_pte))
 			goto unlock_out;
 		pte_unmap_unlock(pte, ptl);