From patchwork Mon Oct 24 04:34:52 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13016597
From: ira.weiny@intel.com
To: Andrew Morton
Cc: Ira Weiny, Matthew Wilcox, Andrea Arcangeli, Peter Xu,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/userfaultfd: Replace kmap/kmap_atomic() with kmap_local_page()
Date: Sun, 23 Oct 2022 21:34:52 -0700
Message-Id: <20221024043452.1491677-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
MIME-Version: 1.0

From: Ira Weiny

kmap() and kmap_atomic() are being deprecated in favor of
kmap_local_page(), which is appropriate for any thread-local
context.[1]

A recent locking bug report with userfaultfd showed that the
conversion of these kmap_atomic() calls required care to avoid
deadlock.[2]

Complete the kmap conversion in userfaultfd by replacing the kmap()
and kmap_atomic() calls with kmap_local_page().  When replacing the
kmap_atomic() call, ensure page faults remain disabled to preserve the
correct fallback behavior, and add a comment to inform future souls of
the requirement.

[1] https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/
[2] https://lore.kernel.org/all/Y1Mh2S7fUGQ%2FiKFR@iweiny-desk3/

Cc: Matthew Wilcox
Cc: Andrew Morton
Cc: Andrea Arcangeli
Signed-off-by: Ira Weiny
---
 mm/userfaultfd.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e24e8a47ce8a..c5db06f4d28d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -157,11 +157,18 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (!page)
 		goto out;
 
-	page_kaddr = kmap_atomic(page);
+	page_kaddr = kmap_local_page(page);
+	/*
+	 * The mmap_lock is held here.  Disable page faults to
+	 * prevent deadlock should copy_from_user() fault.  The
+	 * copy will be retried outside the mmap_lock.
+	 */
+	pagefault_disable();
 	ret = copy_from_user(page_kaddr,
 			     (const void __user *) src_addr,
 			     PAGE_SIZE);
-	kunmap_atomic(page_kaddr);
+	pagefault_enable();
+	kunmap_local(page_kaddr);
 
 	/* fallback to copy_from_user outside mmap_lock */
 	if (unlikely(ret)) {
@@ -646,11 +653,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			page_kaddr = kmap(page);
+			page_kaddr = kmap_local_page(page);
 			err = copy_from_user(page_kaddr,
 					     (const void __user *) src_addr,
 					     PAGE_SIZE);
-			kunmap(page);
+			kunmap_local(page_kaddr);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
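
As a note for anyone applying the same conversion elsewhere, the idiom
used above can be summarized in a standalone sketch.  The helper below
is hypothetical (copy_user_page_nofault() is not a function in this
patch or in the kernel); it only illustrates why the
pagefault_disable()/pagefault_enable() pair must bracket the copy once
kmap_atomic(), which disabled page faults implicitly, is replaced by
kmap_local_page(), which does not:

#include <linux/highmem.h>
#include <linux/uaccess.h>

/*
 * Hypothetical helper, a minimal sketch of the conversion idiom.
 *
 * kmap_atomic() implicitly disabled page faults; kmap_local_page()
 * does not.  A caller holding a lock such as mmap_lock cannot afford
 * to fault inside copy_from_user(), so faults are disabled explicitly
 * and the copy reports failure instead of sleeping.
 */
static int copy_user_page_nofault(struct page *page,
				  const void __user *src)
{
	void *kaddr;
	unsigned long left;

	kaddr = kmap_local_page(page);	/* thread-local mapping */
	pagefault_disable();		/* a fault now fails the copy fast */
	left = copy_from_user(kaddr, src, PAGE_SIZE);
	pagefault_enable();
	kunmap_local(kaddr);

	/* Nonzero 'left' means the copy faulted; the caller must drop
	 * its lock and retry the copy with faults enabled. */
	return left ? -EFAULT : 0;
}

On -EFAULT the caller drops mmap_lock and redoes the copy with faults
enabled; that is the path the "fallback to copy_from_user outside
mmap_lock" comment in the first hunk refers to, and the second hunk is
that retry: it runs after mmap_read_unlock(), where faulting is safe.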