From patchwork Thu Aug 17 06:49:32 2023
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13356022
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
    Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
    Junxiao Chang
Subject: [PATCH v1 1/3] mm/gup: Export check_and_migrate_movable_pages()
Date: Wed, 16 Aug 2023 23:49:32 -0700
Message-Id: <20230817064934.3424431-2-vivek.kasireddy@intel.com>
In-Reply-To: <20230817064934.3424431-1-vivek.kasireddy@intel.com>
References: <20230817064934.3424431-1-vivek.kasireddy@intel.com>

For drivers that would like to migrate pages out of the movable zone
(or CMA) in order to pin them (longterm) for DMA, calling
check_and_migrate_movable_pages() directly is a convenient option
instead of duplicating similar checks (e.g., checking the folios for
zone, hugetlb, etc.) and calling migrate_pages() directly.

Ideally, a driver is expected to call pin_user_pages(FOLL_LONGTERM) to
migrate and pin the pages for longterm DMA, but there are situations
where the GUP APIs cannot be used directly (e.g., when the VMA or the
start address cannot be easily determined but the relevant pages are
available).
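As an aside for prospective callers, the intended calling convention
looks roughly like the sketch below. This is not part of the patch;
my_repin_pages() is a hypothetical stand-in for a driver's own page
lookup and pinning step.

/*
 * Hypothetical driver-side sketch. The pages passed in must carry
 * FOLL_PIN-equivalent references. A return of 0 means every page may
 * be pinned for longterm DMA; -EAGAIN means some pages were migrated
 * and all of them were unpinned, so they must be looked up and pinned
 * again before retrying.
 */
static long my_longterm_pin(struct page **pages, unsigned long nr_pages)
{
	long ret;

	do {
		ret = my_repin_pages(pages, nr_pages);	/* hypothetical */
		if (ret)
			return ret;

		ret = check_and_migrate_movable_pages(nr_pages, pages);
	} while (ret == -EAGAIN);

	return ret;
}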
Cc: David Hildenbrand
Cc: Daniel Vetter
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Signed-off-by: Vivek Kasireddy
---
 include/linux/mm.h | 2 ++
 mm/gup.c           | 9 +++++----
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 406ab9ea818f..81871ffd3ff9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1547,6 +1547,8 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty);
 void unpin_user_pages(struct page **pages, unsigned long npages);
+long check_and_migrate_movable_pages(unsigned long nr_pages,
+				     struct page **pages);
 
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 76d222ccc3ff..18beda89fcf3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2141,8 +2141,8 @@ static int migrate_longterm_unpinnable_pages(
  * If everything is OK and all pages in the range are allowed to be pinned, then
  * this routine leaves all pages pinned and returns zero for success.
  */
-static long check_and_migrate_movable_pages(unsigned long nr_pages,
-					    struct page **pages)
+long check_and_migrate_movable_pages(unsigned long nr_pages,
+				     struct page **pages)
 {
 	unsigned long collected;
 	LIST_HEAD(movable_page_list);
@@ -2156,12 +2156,13 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 						 pages);
 }
 #else
-static long check_and_migrate_movable_pages(unsigned long nr_pages,
-					    struct page **pages)
+long check_and_migrate_movable_pages(unsigned long nr_pages,
+				     struct page **pages)
 {
 	return 0;
 }
 #endif /* CONFIG_MIGRATION */
+EXPORT_SYMBOL(check_and_migrate_movable_pages);
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which

From patchwork Thu Aug 17 06:49:33 2023
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13356024
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
    Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
    Junxiao Chang
Subject: [PATCH v1 2/3] udmabuf: Add support for page migration out of movable zone or CMA
Date: Wed, 16 Aug 2023 23:49:33 -0700
Message-Id: <20230817064934.3424431-3-vivek.kasireddy@intel.com>
In-Reply-To: <20230817064934.3424431-1-vivek.kasireddy@intel.com>
References: <20230817064934.3424431-1-vivek.kasireddy@intel.com>

Since udmabuf could potentially pin pages that reside in the movable
zone or CMA and thereby break features such as memory hotunplug, it
makes sense to migrate the pages out of these areas. To accomplish
this, we note the mapping and the index of each page and then call
check_and_migrate_movable_pages(). Since
check_and_migrate_movable_pages() unpins all the pages (and also
replaces the migrated pages in the mapping) upon successful migration,
we need to retrieve all the pages from their associated mapping using
the index we noted down earlier and pin them again.
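The re-pin step cannot use pin_user_pages() because only the
(mapping, index) pairs are known, not a VMA, so the FOLL_PIN-equivalent
reference is taken by hand. A minimal sketch of just that step,
mirroring what udmabuf_pin_pages() in the diff below does (the helper
name is hypothetical; the bias constant is the one GUP itself uses):

/*
 * Hypothetical sketch: take the equivalent of a FOLL_PIN reference on
 * a page looked up via its mapping. Large folios track pins in
 * _pincount; order-0 folios encode pins by biasing the refcount, which
 * is exactly what a later unpin_user_page() expects to undo.
 */
static void my_pin_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	if (folio_test_large(folio))
		atomic_add(1, &folio->_pincount);
	else
		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
}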
Cc: David Hildenbrand
Cc: Daniel Vetter
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Suggested-by: David Hildenbrand
Signed-off-by: Vivek Kasireddy
---
 drivers/dma-buf/udmabuf.c | 106 +++++++++++++++++++++++++++++++++++---
 1 file changed, 100 insertions(+), 6 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 1a41c4a069ea..63912c73d122 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
 	struct sg_table *sg;
 	struct miscdevice *device;
 	pgoff_t *subpgoff;
+	struct udmabuf_backing_info *backing;
+};
+
+struct udmabuf_backing_info {
+	struct address_space *mapping;
+	pgoff_t mapidx;
 };
 
 static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -156,8 +162,10 @@ static void release_udmabuf(struct dma_buf *buf)
 		put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
 
 	for (pg = 0; pg < ubuf->pagecount; pg++)
-		put_page(ubuf->pages[pg]);
+		unpin_user_page(ubuf->pages[pg]);
+
 	kfree(ubuf->subpgoff);
+	kfree(ubuf->backing);
 	kfree(ubuf->pages);
 	kfree(ubuf);
 }
@@ -211,6 +219,76 @@ static const struct dma_buf_ops udmabuf_ops = {
 #define SEALS_WANTED (F_SEAL_SHRINK)
 #define SEALS_DENIED (F_SEAL_WRITE)
 
+static int udmabuf_pin_pages(struct udmabuf *ubuf)
+{
+	struct address_space *mapping;
+	struct folio *folio;
+	struct page *page;
+	pgoff_t pg, mapidx;
+	int ret;
+
+	for (pg = 0; pg < ubuf->pagecount; pg++) {
+		mapping = ubuf->backing[pg].mapping;
+		mapidx = ubuf->backing[pg].mapidx;
+
+		if (!ubuf->pages[pg]) {
+			page = find_get_page_flags(mapping, mapidx,
+						   FGP_ACCESSED);
+			if (!page) {
+				if (!shmem_mapping(mapping)) {
+					ret = -EINVAL;
+					goto err;
+				}
+
+				page = shmem_read_mapping_page(mapping,
+							       mapidx);
+				if (IS_ERR(page)) {
+					ret = PTR_ERR(page);
+					goto err;
+				}
+			}
+			ubuf->pages[pg] = page;
+		}
+
+		folio = page_folio(ubuf->pages[pg]);
+		if (folio_test_large(folio))
+			atomic_add(1, &folio->_pincount);
+		else
+			folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+
+		/* Since we are doing the equivalent of FOLL_PIN above, we can
+		 * go ahead and release our (udmabuf) reference on the pages.
+		 * Otherwise, migrate_pages() will fail as it doesn't like the
+		 * extra reference.
+		 */
+		put_page(ubuf->pages[pg]);
+	}
+	return 0;
+
+err:
+	while (pg > 0 && ubuf->pages[--pg]) {
+		unpin_user_page(ubuf->pages[pg]);
+		ubuf->pages[pg] = NULL;
+	}
+	return ret;
+}
+
+static long udmabuf_migrate_pages(struct udmabuf *ubuf)
+{
+	long ret;
+
+	do {
+		ret = udmabuf_pin_pages(ubuf);
+		if (ret < 0)
+			break;
+
+		ret = check_and_migrate_movable_pages(ubuf->pagecount,
+						      ubuf->pages);
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
 static long udmabuf_create(struct miscdevice *device,
 			   struct udmabuf_create_list *head,
 			   struct udmabuf_create_item *list)
@@ -224,7 +302,8 @@ static long udmabuf_create(struct miscdevice *device,
 	struct page *page, *hpage = NULL;
 	pgoff_t mapidx, chunkoff, maxchunks;
 	struct hstate *hpstate;
-	int seals, ret = -EINVAL;
+	long ret = -EINVAL;
+	int seals;
 	u32 i, flags;
 
 	ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -252,6 +331,13 @@ static long udmabuf_create(struct miscdevice *device,
 		goto err;
 	}
 
+	ubuf->backing = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->backing),
+				      GFP_KERNEL);
+	if (!ubuf->backing) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
 	pgbuf = 0;
 	for (i = 0; i < head->count; i++) {
 		ret = -EBADFD;
@@ -298,7 +384,8 @@ static long udmabuf_create(struct miscdevice *device,
 			}
 			get_page(hpage);
 			ubuf->pages[pgbuf] = hpage;
-			ubuf->subpgoff[pgbuf++] = chunkoff << PAGE_SHIFT;
+			ubuf->subpgoff[pgbuf] = chunkoff << PAGE_SHIFT;
+			ubuf->backing[pgbuf].mapidx = mapidx;
 			if (++chunkoff == maxchunks) {
 				put_page(hpage);
 				hpage = NULL;
@@ -312,8 +399,10 @@ static long udmabuf_create(struct miscdevice *device,
 				ret = PTR_ERR(page);
 				goto err;
 			}
-			ubuf->pages[pgbuf++] = page;
+			ubuf->pages[pgbuf] = page;
+			ubuf->backing[pgbuf].mapidx = mapidx;
 		}
+		ubuf->backing[pgbuf++].mapping = mapping;
 	}
 	fput(memfd);
 	memfd = NULL;
@@ -323,6 +412,10 @@ static long udmabuf_create(struct miscdevice *device,
 		}
 	}
 
+	ret = udmabuf_migrate_pages(ubuf);
+	if (ret < 0)
+		goto err;
+
 	exp_info.ops = &udmabuf_ops;
 	exp_info.size = ubuf->pagecount << PAGE_SHIFT;
 	exp_info.priv = ubuf;
@@ -341,11 +434,12 @@ static long udmabuf_create(struct miscdevice *device,
 	return dma_buf_fd(buf, flags);
 
 err:
-	while (pgbuf > 0)
-		put_page(ubuf->pages[--pgbuf]);
+	while (pgbuf > 0 && ubuf->pages[--pgbuf])
+		put_page(ubuf->pages[pgbuf]);
 	if (memfd)
 		fput(memfd);
 	kfree(ubuf->subpgoff);
+	kfree(ubuf->backing);
 	kfree(ubuf->pages);
 	kfree(ubuf);
 	return ret;

From patchwork Thu Aug 17 06:49:34 2023
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13356025
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, Shuah Khan, David Hildenbrand, Daniel Vetter,
    Mike Kravetz, Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
    Dongwon Kim, Junxiao Chang
Subject: [PATCH v1 3/3] selftests/dma-buf/udmabuf: Add tests to verify data after page migration
Date: Wed, 16 Aug 2023 23:49:34 -0700
Message-Id: <20230817064934.3424431-4-vivek.kasireddy@intel.com>
In-Reply-To: <20230817064934.3424431-1-vivek.kasireddy@intel.com>
References: <20230817064934.3424431-1-vivek.kasireddy@intel.com>

Since the memfd pages associated with a udmabuf may be migrated as part
of udmabuf create, we need to verify the data coherency after
successful migration. The new tests added in this patch try to do just
that using 4 KB pages as well as 2 MB huge pages for the memfd.
Successful completion of the tests means that there is no disconnect
between the memfd pages and the ones associated with the udmabuf. These
tests can also be augmented in the future to cover newer udmabuf
features (such as handling of memfd hole punch).
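In outline, each of the two new tests performs the sequence condensed
below, using the helpers this patch adds; the wrapper function itself
is hypothetical and only restates what tests 5 and 6 in the diff do
inline.

/*
 * Hypothetical wrapper condensing tests 5 (4 KB pages) and 6 (2 MB huge
 * pages): create a sealed memfd, dirty it, create a udmabuf from it
 * (which may migrate the backing pages), write new data through the
 * memfd mapping, and verify that the same data is visible through the
 * udmabuf mapping.
 */
static void check_coherency_after_migration(int devfd, off64_t size, bool hpage)
{
	int memfd, buf;
	void *addr1, *addr2;

	memfd = create_memfd_with_seals(size, hpage);
	addr1 = mmap_fd(memfd, size);
	write_to_memfd(addr1, size, 'a');
	buf = create_udmabuf_list(devfd, memfd, size);	/* may migrate pages */
	addr2 = mmap_fd(buf, NUM_PAGES * NUM_ENTRIES * getpagesize());
	write_to_memfd(addr1, size, 'b');
	if (compare_chunks(addr1, addr2, size) < 0)	/* also munmaps both */
		exit(1);

	close(buf);
	close(memfd);
}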
Cc: Shuah Khan
Cc: David Hildenbrand
Cc: Daniel Vetter
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Based-on-patch-by: Mike Kravetz
Signed-off-by: Vivek Kasireddy
---
 .../selftests/drivers/dma-buf/udmabuf.c       | 151 +++++++++++++++++-
 1 file changed, 147 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/drivers/dma-buf/udmabuf.c b/tools/testing/selftests/drivers/dma-buf/udmabuf.c
index c812080e304e..d76c813fe652 100644
--- a/tools/testing/selftests/drivers/dma-buf/udmabuf.c
+++ b/tools/testing/selftests/drivers/dma-buf/udmabuf.c
@@ -9,26 +9,132 @@
 #include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
 
 #define TEST_PREFIX	"drivers/dma-buf/udmabuf"
 #define NUM_PAGES	4
+#define NUM_ENTRIES	4
+#define MEMFD_SIZE	1024	/* in pages */
 
-static int memfd_create(const char *name, unsigned int flags)
+static unsigned int page_size;
+
+static int create_memfd_with_seals(off64_t size, bool hpage)
+{
+	int memfd, ret;
+	unsigned int flags = MFD_ALLOW_SEALING;
+
+	if (hpage)
+		flags |= MFD_HUGETLB;
+
+	memfd = memfd_create("udmabuf-test", flags);
+	if (memfd < 0) {
+		printf("%s: [skip,no-memfd]\n", TEST_PREFIX);
+		exit(77);
+	}
+
+	ret = fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);
+	if (ret < 0) {
+		printf("%s: [skip,fcntl-add-seals]\n", TEST_PREFIX);
+		exit(77);
+	}
+
+	ret = ftruncate(memfd, size);
+	if (ret == -1) {
+		printf("%s: [FAIL,memfd-truncate]\n", TEST_PREFIX);
+		exit(1);
+	}
+
+	return memfd;
+}
+
+static int create_udmabuf_list(int devfd, int memfd, off64_t memfd_size)
+{
+	struct udmabuf_create_list *list;
+	int ubuf_fd, i;
+
+	list = malloc(sizeof(struct udmabuf_create_list) +
+		      sizeof(struct udmabuf_create_item) * NUM_ENTRIES);
+	if (!list) {
+		printf("%s: [FAIL, udmabuf-malloc]\n", TEST_PREFIX);
+		exit(1);
+	}
+
+	for (i = 0; i < NUM_ENTRIES; i++) {
+		list->list[i].memfd  = memfd;
+		list->list[i].offset = i * (memfd_size / NUM_ENTRIES);
+		list->list[i].size   = getpagesize() * NUM_PAGES;
+	}
+
+	list->count = NUM_ENTRIES;
+	list->flags = UDMABUF_FLAGS_CLOEXEC;
+	ubuf_fd = ioctl(devfd, UDMABUF_CREATE_LIST, list);
+	free(list);
+	if (ubuf_fd < 0) {
+		printf("%s: [FAIL, udmabuf-create]\n", TEST_PREFIX);
+		exit(1);
+	}
+
+	return ubuf_fd;
+}
+
+static void write_to_memfd(void *addr, off64_t size, char chr)
+{
+	int i;
+
+	for (i = 0; i < size / page_size; i++) {
+		*((char *)addr + (i * page_size)) = chr;
+	}
+}
+
+static void *mmap_fd(int fd, off64_t size)
 {
-	return syscall(__NR_memfd_create, name, flags);
+	void *addr;
+
+	addr = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+	if (addr == MAP_FAILED) {
+		printf("%s: ubuf_fd mmap fail\n", TEST_PREFIX);
+		exit(1);
+	}
+
+	return addr;
+}
+
+static int compare_chunks(void *addr1, void *addr2, off64_t memfd_size)
+{
+	off64_t off;
+	int i = 0, j, k = 0, ret = 0;
+	char char1, char2;
+
+	while (i < NUM_ENTRIES) {
+		off = i * (memfd_size / NUM_ENTRIES);
+		for (j = 0; j < NUM_PAGES; j++, k++) {
+			char1 = *((char *)addr1 + off + (j * getpagesize()));
+			char2 = *((char *)addr2 + (k * getpagesize()));
+			if (char1 != char2) {
+				ret = -1;
+				goto err;
+			}
+		}
+		i++;
+	}
+err:
+	munmap(addr1, memfd_size);
+	munmap(addr2, NUM_ENTRIES * NUM_PAGES * getpagesize());
+	return ret;
 }
 
 int main(int argc, char *argv[])
 {
 	struct udmabuf_create create;
 	int devfd, memfd, buf, ret;
-	off_t size;
-	void *mem;
+	off64_t size;
+	void *addr1, *addr2;
 
 	devfd = open("/dev/udmabuf", O_RDWR);
 	if (devfd < 0) {
@@ -90,6 +196,9 @@ int main(int argc, char *argv[])
 	}
 
 	/* should work */
+	page_size = getpagesize();
+	addr1 = mmap_fd(memfd, size);
+	write_to_memfd(addr1, size, 'a');
 	create.memfd  = memfd;
 	create.offset = 0;
 	create.size   = size;
@@ -98,6 +207,40 @@ int main(int argc, char *argv[])
 		printf("%s: [FAIL,test-4]\n", TEST_PREFIX);
 		exit(1);
 	}
+	munmap(addr1, size);
+	close(buf);
+	close(memfd);
+
+	/* should work (migration of 4k size pages) */
+	size = MEMFD_SIZE * page_size;
+	memfd = create_memfd_with_seals(size, false);
+	addr1 = mmap_fd(memfd, size);
+	write_to_memfd(addr1, size, 'a');
+	buf = create_udmabuf_list(devfd, memfd, size);
+	addr2 = mmap_fd(buf, NUM_PAGES * NUM_ENTRIES * getpagesize());
+	write_to_memfd(addr1, size, 'b');
+	ret = compare_chunks(addr1, addr2, size);
+	if (ret < 0) {
+		printf("%s: [FAIL,test-5]\n", TEST_PREFIX);
+		exit(1);
+	}
+	close(buf);
+	close(memfd);
+
+	/* should work (migration of 2MB size huge pages) */
+	page_size = getpagesize() * 512;	/* 2 MB */
+	size = MEMFD_SIZE * page_size;
+	memfd = create_memfd_with_seals(size, true);
+	addr1 = mmap_fd(memfd, size);
+	write_to_memfd(addr1, size, 'a');
+	buf = create_udmabuf_list(devfd, memfd, size);
+	addr2 = mmap_fd(buf, NUM_PAGES * NUM_ENTRIES * getpagesize());
+	write_to_memfd(addr1, size, 'b');
+	ret = compare_chunks(addr1, addr2, size);
+	if (ret < 0) {
+		printf("%s: [FAIL,test-6]\n", TEST_PREFIX);
+		exit(1);
+	}
 
 	fprintf(stderr, "%s: ok\n", TEST_PREFIX);
 	close(buf);