From patchwork Mon Sep 21 21:17:42 2020
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 11791143
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Jason Gunthorpe, Andrew Morton, Jan Kara, Michal Hocko,
    Kirill Tkhai, Kirill Shutemov, Hugh Dickins, Peter Xu,
    Christoph Hellwig, Andrea Arcangeli, John Hubbard, Oleg Nesterov,
    Leon Romanovsky, Linus Torvalds, Jann Horn
Subject: [PATCH 3/5] mm: Rework return value for copy_one_pte()
Date: Mon, 21 Sep 2020 17:17:42 -0400
Message-Id: <20200921211744.24758-4-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200921211744.24758-1-peterx@redhat.com>
References: <20200921211744.24758-1-peterx@redhat.com>
MIME-Version: 1.0

There's one special path in copy_one_pte() for swap entries, in which
add_swap_count_continuation(GFP_ATOMIC) might fail.  In that case we
return the swp_entry_t so that the caller releases the locks and redoes
the same thing with GFP_KERNEL.

It's confusing that copy_one_pte() must return a swp_entry_t (even when
all the ptes are non-swap entries).  More importantly, we have further
requirements to extend this "we need to do something else, but without
the locks held" case.

Rework the return value into something easier to understand, using the
newly defined COPY_MM_* return codes.  The swp_entry_t is passed back
through the newly introduced struct copy_mm_data parameter.

Another trivial change is to move the reset of the "progress" counter
into the retry path, so that it will also be reset for any future retry
reasons.

This should prepare us for adding new return codes very soon.
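To make the new calling convention concrete, here is a minimal,
self-contained user-space sketch of the pattern the patch introduces
(toy code, not the kernel implementation: swp_entry_t and the copy
helper below are stand-ins, while the COPY_MM_* names and struct
copy_mm_data mirror the patch).  The helper returns a small status
code and hands any slow-path data back through a side structure,
instead of overloading a swp_entry_t return value:

/* toy demo of the status-code + side-data convention (not kernel code) */
#include <stdio.h>

#define COPY_MM_DONE       0
#define COPY_MM_SWAP_CONT  1

typedef struct { unsigned long val; } swp_entry_t;   /* stand-in type */

struct copy_mm_data {
	/* valid only when the helper returns COPY_MM_SWAP_CONT */
	swp_entry_t entry;
};

/* Toy copy helper: "fails" on entry 42 to force the slow path. */
static int copy_one_entry(unsigned long src, struct copy_mm_data *data)
{
	if (src == 42) {
		data->entry = (swp_entry_t){ src };
		return COPY_MM_SWAP_CONT;  /* caller must retry without locks */
	}
	return COPY_MM_DONE;
}

int main(void)
{
	struct copy_mm_data data;
	unsigned long entries[] = { 1, 42, 3 };

	for (int i = 0; i < 3; i++) {
		switch (copy_one_entry(entries[i], &data)) {
		case COPY_MM_DONE:
			printf("copied %lu\n", entries[i]);
			break;
		case COPY_MM_SWAP_CONT:
			/* kernel analogue: drop the page-table locks, then
			 * add_swap_count_continuation(data.entry, GFP_KERNEL) */
			printf("slow path for entry %lu\n", data.entry.val);
			break;
		}
	}
	return 0;
}

The switch dispatch is the point of the rework: a future "do X without
the locks held" case becomes one more return code plus one more case
label, rather than another magic value smuggled through a swp_entry_t.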
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/memory.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7525147908c4..1530bb1070f4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -689,16 +689,24 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 }
 #endif
 
+#define COPY_MM_DONE       0
+#define COPY_MM_SWAP_CONT  1
+
+struct copy_mm_data {
+	/* COPY_MM_SWAP_CONT */
+	swp_entry_t entry;
+};
+
 /*
  * copy one vm_area from one task to the other. Assumes the page tables
  * already present in the new task to be cleared in the whole range
  * covered by this vma.
  */
-static inline unsigned long
+static inline int
 copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		unsigned long addr, int *rss)
+		unsigned long addr, int *rss, struct copy_mm_data *data)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pte_t pte = *src_pte;
@@ -709,8 +717,10 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		if (likely(!non_swap_entry(entry))) {
-			if (swap_duplicate(entry) < 0)
-				return entry.val;
+			if (swap_duplicate(entry) < 0) {
+				data->entry = entry;
+				return COPY_MM_SWAP_CONT;
+			}
 
 			/* make sure dst_mm is on swapoff's mmlist. */
 			if (unlikely(list_empty(&dst_mm->mmlist))) {
@@ -809,7 +819,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 out_set_pte:
 	set_pte_at(dst_mm, addr, dst_pte, pte);
-	return 0;
+	return COPY_MM_DONE;
 }
 
 static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -820,9 +830,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_t *orig_src_pte, *orig_dst_pte;
 	pte_t *src_pte, *dst_pte;
 	spinlock_t *src_ptl, *dst_ptl;
-	int progress = 0;
+	int progress, copy_ret = COPY_MM_DONE;
 	int rss[NR_MM_COUNTERS];
-	swp_entry_t entry = (swp_entry_t){0};
+	struct copy_mm_data data;
 
 again:
 	init_rss_vec(rss);
@@ -837,6 +847,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	orig_dst_pte = dst_pte;
 	arch_enter_lazy_mmu_mode();
 
+	progress = 0;
 	do {
 		/*
 		 * We are holding two locks at this point - either of them
@@ -852,9 +863,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			progress++;
 			continue;
 		}
-		entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
-							vma, addr, rss);
-		if (entry.val)
+		copy_ret = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
+					vma, addr, rss, &data);
+		if (copy_ret != COPY_MM_DONE)
 			break;
 		progress += 8;
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
@@ -866,13 +877,18 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_unmap_unlock(orig_dst_pte, dst_ptl);
 	cond_resched();
 
-	if (entry.val) {
-		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
+	switch (copy_ret) {
+	case COPY_MM_SWAP_CONT:
+		if (add_swap_count_continuation(data.entry, GFP_KERNEL) < 0)
 			return -ENOMEM;
-		progress = 0;
+		break;
+	default:
+		break;
 	}
+
 	if (addr != end)
 		goto again;
+
 	return 0;
 }
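For the retry flow itself, this is the shape copy_pte_range() keeps
after the patch, reduced to a self-contained toy (simplified by
assumption: a single pthread mutex stands in for the two page-table
locks, and atomic_step() stands in for the GFP_ATOMIC attempt that
can fail under the locks):

/* toy demo of try-under-lock, retry-after-blocking-work (not kernel code) */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the GFP_ATOMIC attempt: fails the first time only. */
static bool atomic_step(int *attempts)
{
	return (*attempts)++ > 0;
}

int main(void)
{
	int attempts = 0;
	int progress;

again:
	progress = 0;	/* reset at the top of every retry, as the patch moves it */
	pthread_mutex_lock(&lock);
	while (progress < 8) {
		if (!atomic_step(&attempts)) {
			pthread_mutex_unlock(&lock);
			/* blocking (GFP_KERNEL-style) work happens here,
			 * with the lock dropped -- then start over */
			goto again;
		}
		progress++;
	}
	pthread_mutex_unlock(&lock);
	printf("done after %d attempts\n", attempts);
	return 0;
}

Resetting "progress" at the top of the retry path rather than only in
the swap-continuation branch means any new COPY_MM_* case that jumps
back to the label gets a fresh counter for free, which is what the
commit message means by resetting it for future retry reasons as well.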