From patchwork Fri Jan 31 06:14:41 2020
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 11359255
Date: Thu, 30 Jan 2020 22:14:41 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bharata@linux.ibm.com, chris@chrisdown.name,
 hch@lst.de, jgg@mellanox.com, jglisse@redhat.com, jhubbard@nvidia.com,
 linux-mm@kvack.org, mhocko@kernel.org, mm-commits@vger.kernel.org,
 rcampbell@nvidia.com, torvalds@linux-foundation.org
Subject: [patch 066/118] mm/migrate: clean up some minor coding style
Message-ID: <20200131061441.7Lu59jFO0%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>

From: Ralph Campbell <rcampbell@nvidia.com>
Subject: mm/migrate: clean up some minor coding style

Fix some comment typos and clean up some coding style in preparation for
the next patch.  No functional changes.

Link: http://lkml.kernel.org/r/20200107211208.24595-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/migrate.c |   34 +++++++++++++---------------------
 1 file changed, 13 insertions(+), 21 deletions(-)

--- a/mm/migrate.c~mm-migrate-clean-up-some-minor-coding-style
+++ a/mm/migrate.c
@@ -986,7 +986,7 @@ static int move_to_new_page(struct page
 	}
 
 	/*
-	 * Anonymous and movable page->mapping will be cleard by
+	 * Anonymous and movable page->mapping will be cleared by
 	 * free_pages_prepare so don't reset it here for keeping
 	 * the type to work PageAnon, for example.
 	 */
@@ -1199,8 +1199,7 @@ out:
 		/*
 		 * A page that has been migrated has all references
 		 * removed and will be freed. A page that has not been
-		 * migrated will have kepts its references and be
-		 * restored.
+		 * migrated will have kept its references and be restored.
 		 */
 		list_del(&page->lru);
 
@@ -2779,27 +2778,18 @@ static void migrate_vma_insert_page(stru
 	if (pte_present(*ptep)) {
 		unsigned long pfn = pte_pfn(*ptep);
 
-		if (!is_zero_pfn(pfn)) {
-			pte_unmap_unlock(ptep, ptl);
-			mem_cgroup_cancel_charge(page, memcg, false);
-			goto abort;
-		}
+		if (!is_zero_pfn(pfn))
+			goto unlock_abort;
 		flush = true;
-	} else if (!pte_none(*ptep)) {
-		pte_unmap_unlock(ptep, ptl);
-		mem_cgroup_cancel_charge(page, memcg, false);
-		goto abort;
-	}
+	} else if (!pte_none(*ptep))
+		goto unlock_abort;
 
 	/*
-	 * Check for usefaultfd but do not deliver the fault. Instead,
+	 * Check for userfaultfd but do not deliver the fault. Instead,
 	 * just back off.
 	 */
-	if (userfaultfd_missing(vma)) {
-		pte_unmap_unlock(ptep, ptl);
-		mem_cgroup_cancel_charge(page, memcg, false);
-		goto abort;
-	}
+	if (userfaultfd_missing(vma))
+		goto unlock_abort;
 
 	inc_mm_counter(mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, addr, false);
@@ -2823,6 +2813,9 @@ static void migrate_vma_insert_page(stru
 	*src = MIGRATE_PFN_MIGRATE;
 	return;
 
+unlock_abort:
+	pte_unmap_unlock(ptep, ptl);
+	mem_cgroup_cancel_charge(page, memcg, false);
 abort:
 	*src &= ~MIGRATE_PFN_MIGRATE;
 }
@@ -2855,9 +2848,8 @@ void migrate_vma_pages(struct migrate_vm
 		}
 
 		if (!page) {
-			if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE)) {
+			if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
 				continue;
-			}
 
 			if (!notified) {
 				notified = true;