From patchwork Mon Oct 24 04:33:05 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13016596
From: ira.weiny@intel.com
To: Andrew Morton
Cc: Ira Weiny, Randy Dunlap, Peter Xu, Andrea Arcangeli, Matthew Wilcox,
    kernel test robot, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH for rc] mm/shmem: Ensure proper fallback if page faults
Date: Sun, 23 Oct 2022 21:33:05 -0700
Message-Id: <20221024043305.1491403-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
From: Ira Weiny

The kernel test robot flagged a recursive lock as a result of a
conversion from kmap_atomic() to kmap_local_folio().[Link]

The cause is that the code depended on the kmap_atomic() side effect of
disabling page faults. The code expects the in-atomic copy to fail when
the source page is not resident, so that it can take the fallback path.

git archaeology implied that the recursion may not be an actual bug.[1]
However, the mmap_lock needed in the fault may be the very lock already
held.[2]

Add an explicit pagefault_disable() and a big comment to explain this
for future souls looking at this code.

[1] https://lore.kernel.org/all/Y1MymJ%2FINb45AdaY@iweiny-desk3/
[2] https://lore.kernel.org/all/Y1M2p9OtBGnKwGUE@x1n/

Fixes: 7a7256d5f512 ("shmem: convert shmem_mfill_atomic_pte() to use a folio")
Cc: Andrew Morton
Cc: Randy Dunlap
Cc: Peter Xu
Cc: Andrea Arcangeli
Reported-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Link: https://lore.kernel.org/r/202210211215.9dc6efb5-yujie.liu@intel.com
Signed-off-by: Ira Weiny

---
Thanks to Matt and Andrew for initial diagnosis.
Thanks to Randy for pointing out C code needs ';' :-D
Thanks to Andrew for suggesting an elaborate comment
Thanks to Peter for pointing out that the mm's may be the same.
---
 mm/shmem.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index 8280a5cb48df..c1bca31cd485 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2424,9 +2424,16 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
 		if (!zeropage) {	/* COPY */
 			page_kaddr = kmap_local_folio(folio, 0);
+			/*
+			 * The mmap_lock is held here.  Disable page faults to
+			 * prevent deadlock should copy_from_user() fault.  The
+			 * copy will be retried outside the mmap_lock.
+			 */
+			pagefault_disable();
 			ret = copy_from_user(page_kaddr,
 					     (const void __user *)src_addr,
 					     PAGE_SIZE);
+			pagefault_enable();
 			kunmap_local(page_kaddr);
 
 			/* fallback to copy_from_user outside mmap_lock */