From patchwork Fri Jan 18 23:51:23 2019
X-Patchwork-Submitter: Wei Yang
X-Patchwork-Id: 10771841
From: Wei Yang
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, mhocko@suse.com, cl@linux.com,
    penberg@kernel.org, rientjes@google.com, Wei Yang
Subject: [PATCH] mm: fix some typo scatter in mm directory
Date: Sat, 19 Jan 2019 07:51:23 +0800
Message-Id: <20190118235123.27843-1-richard.weiyang@gmail.com>
X-Mailer: git-send-email 2.15.1

No functional change.
Signed-off-by: Wei Yang
Reviewed-by: Pekka Enberg
Acked-by: Mike Rapoport
---
 include/linux/mmzone.h | 2 +-
 mm/migrate.c           | 2 +-
 mm/mmap.c              | 8 ++++----
 mm/page_alloc.c        | 4 ++--
 mm/slub.c              | 2 +-
 mm/vmscan.c            | 2 +-
 6 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 842f9189537b..faf8cf60f900 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1299,7 +1299,7 @@ void memory_present(int nid, unsigned long start, unsigned long end);
 
 /*
  * If it is possible to have holes within a MAX_ORDER_NR_PAGES, then we
- * need to check pfn validility within that MAX_ORDER_NR_PAGES block.
+ * need to check pfn validity within that MAX_ORDER_NR_PAGES block.
  * pfn_valid_within() should be used in this case; we optimise this away
  * when we have no holes within a MAX_ORDER_NR_PAGES block.
  */
diff --git a/mm/migrate.c b/mm/migrate.c
index a16b15090df3..2122f38f569e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -100,7 +100,7 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	/*
 	 * Check PageMovable before holding a PG_lock because page's owner
 	 * assumes anybody doesn't touch PG_lock of newly allocated page
-	 * so unconditionally grapping the lock ruins page's owner side.
+	 * so unconditionally grabbing the lock ruins page's owner side.
 	 */
 	if (unlikely(!__PageMovable(page)))
 		goto out_putpage;
diff --git a/mm/mmap.c b/mm/mmap.c
index f901065c4c64..55b8e6b55738 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -438,7 +438,7 @@ static void vma_gap_update(struct vm_area_struct *vma)
 {
 	/*
 	 * As it turns out, RB_DECLARE_CALLBACKS() already created a callback
-	 * function that does exacltly what we want.
+	 * function that does exactly what we want.
 	 */
 	vma_gap_callbacks_propagate(&vma->vm_rb, NULL);
 }
@@ -1012,7 +1012,7 @@ static inline int is_mergeable_vma(struct vm_area_struct *vma,
 	 * VM_SOFTDIRTY should not prevent from VMA merging, if we
 	 * match the flags but dirty bit -- the caller should mark
 	 * merged VMA as dirty. If dirty bit won't be excluded from
-	 * comparison, we increase pressue on the memory system forcing
+	 * comparison, we increase pressure on the memory system forcing
 	 * the kernel to generate new VMAs when old one could be
 	 * extended instead.
 	 */
@@ -1115,7 +1115,7 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
  * PPPP    NNNN    PPPPPPPPPPPP    PPPPPPPPNNNN    PPPPNNNNNNNN
  * might become    case 1 below    case 2 below    case 3 below
  *
- * It is important for case 8 that the the vma NNNN overlapping the
+ * It is important for case 8 that the vma NNNN overlapping the
  * region AAAA is never going to extended over XXXX. Instead XXXX must
  * be extended in region AAAA and NNNN must be removed. This way in
  * all cases where vma_merge succeeds, the moment vma_adjust drops the
@@ -1645,7 +1645,7 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 #endif /* __ARCH_WANT_SYS_OLD_MMAP */
 
 /*
- * Some shared mappigns will want the pages marked read-only
+ * Some shared mappings will want the pages marked read-only
  * to track write events. If so, we'll downgrade vm_page_prot
  * to the private version (using protection_map[] without the
  * VM_SHARED bit).
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d7073cedd087..43ceb2481ad5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7493,7 +7493,7 @@ static void __setup_per_zone_wmarks(void)
 			 * value here.
 			 *
 			 * The WMARK_HIGH-WMARK_LOW and (WMARK_LOW-WMARK_MIN)
-			 * deltas control asynch page reclaim, and so should
+			 * deltas control async page reclaim, and so should
 			 * not be capped for highmem.
 			 */
 			unsigned long min_pages;
@@ -7970,7 +7970,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 
 		/*
 		 * Hugepages are not in LRU lists, but they're movable.
-		 * We need not scan over tail pages bacause we don't
+		 * We need not scan over tail pages because we don't
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..c3738f671a0c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2111,7 +2111,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		if (!lock) {
 			lock = 1;
 			/*
-			 * Taking the spinlock removes the possiblity
+			 * Taking the spinlock removes the possibility
 			 * that acquire_slab() will see a slab page that
 			 * is frozen
 			 */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a714c4f800e9..1b573812e546 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3537,7 +3537,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
  *
  * kswapd scans the zones in the highmem->normal->dma direction. It skips
  * zones which have free_pages > high_wmark_pages(zone), but once a zone is
- * found to have free_pages <= high_wmark_pages(zone), any page is that zone
+ * found to have free_pages <= high_wmark_pages(zone), any page in that zone
  * or lower is eligible for reclaim until at least one usable zone is
  * balanced.
  */