From patchwork Wed Jun 28 17:25:24 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13296084
Date: Wed, 28 Jun 2023 10:25:24 -0700
Message-ID: <20230628172529.744839-2-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Subject: [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
 josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
 laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
 jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
 apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
 david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
 viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
 surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com, Christoph Hellwig

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However, a later commit [2]
removed polling support. Therefore it seems safe to remove the do_poll
parameter in read_swap_cache_async() and always call swap_readpage() with
synchronous=false, waiting for IO completion in folio_lock_or_retry().
[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")

Suggested-by: "Huang, Ying"
Signed-off-by: Suren Baghdasaryan
Reviewed-by: "Huang, Ying"
Reviewed-by: Christoph Hellwig
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index b5ffbaf616f5..b1e8adf1234e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -215,7 +215,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 			continue;
 
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, index, false, &splug);
+					     vma, index, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -252,7 +252,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		rcu_read_unlock();
 
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
-					     NULL, 0, false, &splug);
+					     NULL, 0, &splug);
 		if (page)
 			put_page(page);
diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a3839de71f3f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -517,15 +517,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);
 
 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);
 
 	return retpage;
 }
@@ -620,7 +619,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -628,7 +627,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;
 
-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -660,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -825,7 +823,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }
 
 /**
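The control flow after this change can be sketched in a standalone userspace mock (all `mock_*` types and functions below are illustrative stand-ins, not kernel APIs): the swap-cache read path now always passes synchronous=false to swap_readpage(), and the caller waits for the IO in folio_lock_or_retry() instead of polling.

```c
/* Userspace mock of the post-patch read_swap_cache_async() flow.
 * Everything here is an illustrative stand-in, not kernel code. */
#include <assert.h>
#include <stdbool.h>

struct mock_page { bool uptodate; };

/* stand-in for swap_readpage(); 'synchronous' mirrors the kernel parameter */
static void mock_swap_readpage(struct mock_page *page, bool synchronous)
{
    (void)synchronous;      /* always false after this patch */
    page->uptodate = true;  /* pretend the block layer filled the page */
}

/* stand-in for read_swap_cache_async() without the removed do_poll arg */
static struct mock_page *mock_read_swap_cache_async(struct mock_page *page)
{
    bool page_was_allocated = true; /* assume a fresh swap-cache page */

    if (page_was_allocated)
        mock_swap_readpage(page, false); /* hard-coded, no do_poll */
    return page;
}
```

The point of the sketch is only that the `do_poll` decision no longer exists at any call site; the `false` is hard-coded at the single place it matters.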
From patchwork Wed Jun 28 17:25:25 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13296085
Date: Wed, 28 Jun 2023 10:25:25 -0700
Message-ID: <20230628172529.744839-3-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Subject: [PATCH v5 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

VM_FAULT_RESULT_TRACE should contain an element for every vm_fault_reason
to be used as flag_array inside trace_print_flags_seq().
The element for VM_FAULT_COMPLETED is missing; add it.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Peter Xu
---
 include/linux/mm_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..79765e3dd8f3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1070,7 +1070,8 @@ enum vm_fault_reason {
 	{ VM_FAULT_RETRY,		"RETRY" },	\
 	{ VM_FAULT_FALLBACK,		"FALLBACK" },	\
 	{ VM_FAULT_DONE_COW,		"DONE_COW" },	\
-	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" },	\
+	{ VM_FAULT_COMPLETED,		"COMPLETED" }
 
 struct vm_special_mapping {
 	const char *name;	/* The name, e.g. "[vdso]". */
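Why a missing table entry matters can be shown with a small userspace sketch of how such a flag_array is consumed. `decode_flags()` below models what trace_print_flags_seq() does with VM_FAULT_RESULT_TRACE: named bits are appended to the output buffer, and any set bit without a table entry is left over (the tracer would print it as a raw hex remainder instead of "COMPLETED"). The `MOCK_*` constants and the decoder are illustrative stand-ins, not the kernel implementation; the bit values mirror enum vm_fault_reason.

```c
/* Userspace sketch of flag_array decoding, modeled on
 * trace_print_flags_seq(). Illustrative stand-in, not kernel code. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* bit values mirror enum vm_fault_reason in include/linux/mm_types.h */
#define MOCK_VM_FAULT_RETRY     0x000400ul
#define MOCK_VM_FAULT_COMPLETED 0x004000ul

struct trace_flag { unsigned long mask; const char *name; };

/* Append "A|B|..." for every named bit; return the bits that had no
 * name in the table (these would show up as a raw hex remainder). */
static unsigned long decode_flags(unsigned long flags,
                                  const struct trace_flag *table, size_t n,
                                  char *buf, size_t len)
{
    buf[0] = '\0';
    for (size_t i = 0; i < n; i++) {
        if (!(flags & table[i].mask))
            continue;
        if (buf[0] != '\0')
            strncat(buf, "|", len - strlen(buf) - 1);
        strncat(buf, table[i].name, len - strlen(buf) - 1);
        flags &= ~table[i].mask;
    }
    return flags;
}
```

With a table lacking the COMPLETED entry, a fault result of RETRY|COMPLETED decodes to just "RETRY" plus an unnamed leftover bit; with the entry added, every bit gets its name.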
From patchwork Wed Jun 28 17:25:26 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13296086
Date: Wed, 28 Jun 2023 10:25:26 -0700
Message-ID: <20230628172529.744839-4-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Subject: [PATCH v5 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

handle_mm_fault() returning VM_FAULT_RETRY or VM_FAULT_COMPLETED means
mmap_lock has been released.
However, with per-VMA locks the behavior is different and the caller should
still release the lock. To make the rules consistent for the caller, drop the
per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED. Currently
the only path returning VM_FAULT_RETRY under per-VMA locks is do_swap_page(),
and no path returns VM_FAULT_COMPLETED for now.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 arch/arm64/mm/fault.c   | 3 ++-
 arch/powerpc/mm/fault.c | 3 ++-
 arch/s390/mm/fault.c    | 3 ++-
 arch/x86/mm/fault.c     | 3 ++-
 mm/memory.c             | 1 +
 5 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c85b6d70b222..9c06c53a9ff3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -612,7 +612,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 531177a4ee08..4697c5dca31c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -494,7 +494,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index b65144c392b0..cccefe41038b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -418,7 +418,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
 		goto out;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e4399983c50c..d69c85c1c04e 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1347,7 +1347,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/mm/memory.c b/mm/memory.c
index f69fbc251198..f14d45957b83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3713,6 +3713,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 		ret = VM_FAULT_RETRY;
+		vma_end_read(vma);
 		goto out;
 	}
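The lock-ownership rule the series establishes can be modeled in a standalone userspace sketch (all `mock_*` names are illustrative stand-ins, not kernel APIs): when the fault handler returns VM_FAULT_RETRY or VM_FAULT_COMPLETED it has already dropped the per-VMA read lock itself, so the arch caller drops it only in the remaining cases, and neither path unlocks twice.

```c
/* Userspace model of the per-VMA lock hand-off after this series.
 * Illustrative stand-ins only, not kernel code. */
#include <assert.h>
#include <stdbool.h>

#define MOCK_VM_FAULT_RETRY     0x000400u
#define MOCK_VM_FAULT_COMPLETED 0x004000u

struct mock_vma { bool read_locked; };

static void mock_vma_end_read(struct mock_vma *vma)
{
    assert(vma->read_locked);   /* trips on a double unlock */
    vma->read_locked = false;
}

/* handle_mm_fault() stand-in: the retry path now unlocks by itself,
 * mirroring the vma_end_read() added to do_swap_page() */
static unsigned int mock_handle_mm_fault(struct mock_vma *vma, bool must_retry)
{
    if (must_retry) {
        mock_vma_end_read(vma);
        return MOCK_VM_FAULT_RETRY;
    }
    return 0; /* handled; the lock is still held by the caller */
}

/* the common arch fault-handler pattern after this series */
static unsigned int mock_arch_fault_path(struct mock_vma *vma, bool must_retry)
{
    unsigned int fault = mock_handle_mm_fault(vma, must_retry);

    if (!(fault & (MOCK_VM_FAULT_RETRY | MOCK_VM_FAULT_COMPLETED)))
        mock_vma_end_read(vma); /* only drop if the callee did not */
    return fault;
}
```

Either way the lock ends up released exactly once, which is the consistency the commit message asks for.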
Date: Wed, 28 Jun 2023 10:25:27 -0700
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Message-ID: <20230628172529.744839-5-surenb@google.com>
Subject: [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Change folio_lock_or_retry to accept a struct vm_fault and to return a vm_fault_t directly.
Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..59d070c55c97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
@@ -1001,11 +1000,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+		struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 00f01d8ead47..52bcf12dcdbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,32 +1701,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
  *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;
 
 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1734,13 +1736,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
-	return true;
+	return 0;
 }
 
 /**
diff --git a/mm/memory.c b/mm/memory.c
index f14d45957b83..345080052003 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,9 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3704,7 +3706,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3826,12 +3827,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}
 
 	if (swapcache) {
 		/*

From patchwork Wed Jun 28 17:25:28 2023
Date: Wed, 28 Jun 2023 10:25:28 -0700
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Message-ID: <20230628172529.744839-6-surenb@google.com>
Subject: [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

When a page fault is handled under per-VMA lock protection, all swap page faults are retried with mmap_lock because folio_lock_or_retry has to drop and reacquire mmap_lock if the folio could not be locked immediately. Follow the same pattern as with mmap_lock and drop the per-VMA lock when waiting for the folio, then retry once the folio is available. With this obstacle removed, enable do_swap_page to operate under per-VMA lock protection. Drivers implementing ops->migrate_to_ram might still rely on mmap_lock, so we have to fall back to mmap_lock in that particular case.

Note that the only time do_swap_page calls synchronous swap_readpage is when SWP_SYNCHRONOUS_IO is set, which is only set for QUEUE_FLAG_SYNCHRONOUS devices: brd, zram and nvdimms (both btt and pmem). Therefore we don't sleep in this path, and there's no need to drop the mmap or per-VMA lock.
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
Tested-by: Alistair Popple
Reviewed-by: Alistair Popple
---
 include/linux/mm.h | 13 +++++++++++++
 mm/filemap.c       | 17 ++++++++---------
 mm/memory.c        | 16 ++++++++++------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fec149585985..bbaec479bf98 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -723,6 +723,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -736,6 +744,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 52bcf12dcdbf..d4d8f474e0c5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1703,27 +1703,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  * Return values:
  * 0 - folio is locked.
  * VM_FAULT_RETRY - folio is not locked.
- *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *     which case mmap_lock is still held.
+ *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
+ *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
+ *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
  */
 vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
-	struct mm_struct *mm = vmf->vma->vm_mm;
 	unsigned int flags = vmf->flags;
 
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return VM_FAULT_RETRY.
+		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
+		 * released even though returning VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			return VM_FAULT_RETRY;
 
-		mmap_read_unlock(mm);
+		release_fault_lock(vmf);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
@@ -1735,7 +1734,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			release_fault_lock(vmf);
 			return VM_FAULT_RETRY;
 		}
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index 345080052003..4fb8ecfc6d13 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		vma_end_read(vma);
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
@@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
+			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+				/*
+				 * migrate_to_ram is not yet ready to operate
+				 * under VMA lock.
+				 */
+				vma_end_read(vma);
+				ret = VM_FAULT_RETRY;
+				goto out;
+			}
+
 			vmf->page = pfn_swap_entry_to_page(entry);
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);

From patchwork Wed Jun 28 17:25:29 2023
Date: Wed, 28 Jun 2023 10:25:29 -0700
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
Message-ID: <20230628172529.744839-7-surenb@google.com>
Subject: [PATCH v5 6/6] mm: handle userfaults under VMA lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Enable handle_userfault to operate under VMA lock by releasing the VMA lock instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT should never be used when handling faults under per-VMA lock protection because that would break the assumption that the lock is dropped on retry.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 fs/userfaultfd.c   | 34 ++++++++++++++--------------------
 include/linux/mm.h | 26 ++++++++++++++++++++++++++
 mm/memory.c        | 20 +++++++++++---------
 3 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4e800bb7d2ab..9d61e3e7da7b 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(vmf);
 
-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;
 
@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -337,7 +334,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t *pte;
 	bool ret = true;
 
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -445,7 +442,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
*/ - mmap_assert_locked(mm); + assert_fault_locked(vmf); ctx = vma->vm_userfaultfd_ctx.ctx; if (!ctx) @@ -561,15 +558,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason) spin_unlock_irq(&ctx->fault_pending_wqh.lock); if (!is_vm_hugetlb_page(vma)) - must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags, - reason); + must_wait = userfaultfd_must_wait(ctx, vmf, reason); else - must_wait = userfaultfd_huge_must_wait(ctx, vma, - vmf->address, - vmf->flags, reason); + must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason); if (is_vm_hugetlb_page(vma)) hugetlb_vma_unlock_read(vma); - mmap_read_unlock(mm); + release_fault_lock(vmf); if (likely(must_wait && !READ_ONCE(ctx->released))) { wake_up_poll(&ctx->fd_wqh, EPOLLIN); diff --git a/include/linux/mm.h b/include/linux/mm.h index bbaec479bf98..cd5389338def 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma) return true; } +static inline void vma_assert_locked(struct vm_area_struct *vma) +{ + int mm_lock_seq; + + if (__is_vma_write_locked(vma, &mm_lock_seq)) + return; + + lockdep_assert_held(&vma->vm_lock->lock); + VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma); +} + static inline void vma_assert_write_locked(struct vm_area_struct *vma) { int mm_lock_seq; @@ -731,6 +742,15 @@ static inline void release_fault_lock(struct vm_fault *vmf) mmap_read_unlock(vmf->vma->vm_mm); } +static inline +void assert_fault_locked(struct vm_fault *vmf) +{ + if (vmf->flags & FAULT_FLAG_VMA_LOCK) + vma_assert_locked(vmf->vma); + else + mmap_assert_locked(vmf->vma->vm_mm); +} + #else /* CONFIG_PER_VMA_LOCK */ static inline void vma_init_lock(struct vm_area_struct *vma) {} @@ -749,6 +769,12 @@ static inline void release_fault_lock(struct vm_fault *vmf) mmap_read_unlock(vmf->vma->vm_mm); } +static inline +void assert_fault_locked(struct vm_fault *vmf) +{ + mmap_assert_locked(vmf->vma->vm_mm); +} + #endif /* 
CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 4fb8ecfc6d13..672f7383a622 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5202,6 +5202,17 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
 				 !is_cow_mapping(vma->vm_flags)))
 			return VM_FAULT_SIGSEGV;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * Per-VMA locks can't be used with FAULT_FLAG_RETRY_NOWAIT because of
+	 * the assumption that lock is dropped on VM_FAULT_RETRY.
+	 */
+	if (WARN_ON_ONCE((*flags &
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)) ==
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)))
+		return VM_FAULT_SIGSEGV;
+#endif
+
 	return 0;
 }
 
@@ -5294,15 +5305,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);