From patchwork Tue Jan 14 08:07:59 2025
X-Patchwork-Submitter: Vivek Kasireddy
X-Patchwork-Id: 13938519
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, Gerd Hoffmann, Steve Sistare, Muchun Song,
    David Hildenbrand, Andrew Morton
Subject: [PATCH v2 0/2] mm/memfd: reserve hugetlb folios before allocation
Date: Tue, 14 Jan 2025 00:07:59 -0800
Message-ID: <20250114080927.2616684-1-vivek.kasireddy@intel.com>

There are cases where we try to pin a folio and discover that it has
not been faulted in. We then attempt to allocate it in
memfd_alloc_folio(), but if there are no active reservations at that
instant, we can hit a crash (VM_BUG_ON(!h->resv_huge_pages)). This
issue was reported by syzbot.

To avoid this situation and fix the issue, we just need to make a
reservation (by calling hugetlb_reserve_pages()) before we try to
allocate the folio. This ensures that the region/subpool accounting
associated with our allocation is done properly.
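The reservation-first flow described above can be sketched as kernel-style pseudocode. This is a simplified illustration of the cover letter's description, not the literal patch: locking, gfp flags, and subpool details are elided, and the argument lists are placeholders.

```
/*
 * Pseudocode sketch of the fixed memfd_alloc_folio() flow for hugetlb.
 * hugetlb_reserve_pages(), hugetlb_unreserve_pages() and
 * alloc_hugetlb_folio_reserve() are the real functions named in this
 * series; everything else here is illustrative.
 */
struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
{
        /* 1. Reserve first, so region/subpool accounting is set up
         *    before the allocation is attempted. */
        if (!hugetlb_reserve_pages(inode, idx, idx + 1, ...))
                return ERR_PTR(-ENOMEM);

        /* 2. Allocate from the reservation; in v2 the old
         *    VM_BUG_ON(!h->resv_huge_pages) becomes WARN_ON_ONCE(). */
        folio = alloc_hugetlb_folio_reserve(...);

        /* 3. If the folio cannot be added to the page cache, drop the
         *    reservation again (also a v2 change). */
        if (folio_add_to_page_cache_failed)
                hugetlb_unreserve_pages(inode, idx, idx + 1, 1);
        ...
}
```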
-----------------------------

Patchset overview:
Patch 1: Fix for VM_BUG_ON(!h->resv_huge_pages) crash reported by syzbot
Patch 2: New udmabuf selftest to invoke memfd_alloc_folio()

This series is tested by running the new udmabuf selftest introduced in
patch #2 along with the other selftests.

Changelog:

v1 -> v2:
- Replace VM_BUG_ON() with WARN_ON_ONCE() in the function
  alloc_hugetlb_folio_reserve() (David)
- Move the inline function subpool_inode() from hugetlb.c into the
  relevant header (hugetlb.h)
- Call hugetlb_unreserve_pages() if the folio cannot be added to the
  page cache as well
- Add a new udmabuf selftest to exercise the same path as that of
  syzbot

Cc: Gerd Hoffmann
Cc: Steve Sistare
Cc: Muchun Song
Cc: David Hildenbrand
Cc: Andrew Morton

Vivek Kasireddy (2):
  mm/memfd: reserve hugetlb folios before allocation
  selftests/udmabuf: add a test to pin first before writing to memfd

 include/linux/hugetlb.h                         |  5 +++++
 mm/hugetlb.c                                    | 14 ++++++-------
 mm/memfd.c                                      | 14 ++++++++++---
 .../selftests/drivers/dma-buf/udmabuf.c         | 20 ++++++++++++++++++-
 4 files changed, 41 insertions(+), 12 deletions(-)