From patchwork Tue Dec 3 13:22:37 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Thomas Hellström (Intel) <thomas_os@shipmail.org>
X-Patchwork-Id: 11271261
From: Thomas Hellström (VMware) <thomas_os@shipmail.org>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com,
	Thomas Hellstrom <thellstrom@vmware.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	Jérôme Glisse <jglisse@redhat.com>,
	Christian König <christian.koenig@amd.com>
Subject: [PATCH 6/8] drm: Add a drm_get_unmapped_area() helper
Date: Tue, 3 Dec 2019 14:22:37 +0100
Message-Id: <20191203132239.5910-7-thomas_os@shipmail.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191203132239.5910-1-thomas_os@shipmail.org>
References: <20191203132239.5910-1-thomas_os@shipmail.org>

From: Thomas Hellstrom <thellstrom@vmware.com>

This helper is used to align user-space buffer object addresses to
huge page boundaries, minimizing the chance of alignment mismatch
between user-space addresses and physical addresses.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/drm_file.c | 130 +++++++++++++++++++++++++++++++++++++
 include/drm/drm_file.h     |   5 ++
 2 files changed, 135 insertions(+)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index ea34bc991858..e5b4024cd397 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -31,6 +31,8 @@
  * OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <uapi/asm/mman.h>
+
 #include <linux/dma-fence.h>
 #include <linux/module.h>
 #include <linux/pci.h>
@@ -41,6 +43,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 #include <drm/drm_print.h>
+#include <drm/drm_vma_manager.h>
 
 #include "drm_crtc_internal.h"
 #include "drm_internal.h"
@@ -754,3 +757,130 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
 	spin_unlock_irqrestore(&dev->event_lock, irqflags);
 }
 EXPORT_SYMBOL(drm_send_event);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/*
+ * drm_addr_inflate() attempts to construct an aligned area by inflating
+ * the area size and skipping the unaligned start of the area.
+ * adapted from shmem_get_unmapped_area()
+ */
+static unsigned long drm_addr_inflate(unsigned long addr,
+				       unsigned long len,
+				       unsigned long pgoff,
+				       unsigned long flags,
+				       unsigned long huge_size)
+{
+	unsigned long offset, inflated_len;
+	unsigned long inflated_addr;
+	unsigned long inflated_offset;
+
+	offset = (pgoff << PAGE_SHIFT) & (huge_size - 1);
+	if (offset && offset + len < 2 * huge_size)
+		return addr;
+	if ((addr & (huge_size - 1)) == offset)
+		return addr;
+
+	inflated_len = len + huge_size - PAGE_SIZE;
+	if (inflated_len > TASK_SIZE)
+		return addr;
+	if (inflated_len < len)
+		return addr;
+
+	inflated_addr = current->mm->get_unmapped_area(NULL, 0, inflated_len,
+						       0, flags);
+	if (IS_ERR_VALUE(inflated_addr))
+		return addr;
+	if (inflated_addr & ~PAGE_MASK)
+		return addr;
+
+	inflated_offset = inflated_addr & (huge_size - 1);
+	inflated_addr += offset - inflated_offset;
+	if (inflated_offset > offset)
+		inflated_addr += huge_size;
+
+	if (inflated_addr > TASK_SIZE - len)
+		return addr;
+
+	return inflated_addr;
+}
+
+/**
+ * drm_get_unmapped_area() - Get an unused user-space virtual memory area
+ * suitable for huge page table entries.
+ * @file: The struct file representing the address space being mmap()'d.
+ * @uaddr: Start address suggested by user-space.
+ * @len: Length of the area.
+ * @pgoff: The page offset into the address space.
+ * @flags: mmap flags
+ * @mgr: The address space manager used by the drm driver. This argument can
+ * probably be removed at some point when all drivers use the same
+ * address space manager.
+ *
+ * This function attempts to find an unused user-space virtual memory area
+ * that can accommodate the size we want to map, and that is properly
+ * aligned to facilitate huge page table entries matching actual
+ * huge pages or huge page aligned memory in buffer objects. Buffer objects
+ * are assumed to start at huge page boundary pfns (io memory) or be
+ * populated by huge pages aligned to the start of the buffer object
+ * (system- or coherent memory). Adapted from shmem_get_unmapped_area.
+ *
+ * Return: aligned user-space address.
+ */
+unsigned long drm_get_unmapped_area(struct file *file,
+				    unsigned long uaddr, unsigned long len,
+				    unsigned long pgoff, unsigned long flags,
+				    struct drm_vma_offset_manager *mgr)
+{
+	unsigned long addr;
+	unsigned long inflated_addr;
+	struct drm_vma_offset_node *node;
+
+	if (len > TASK_SIZE)
+		return -ENOMEM;
+
+	/* Adjust mapping offset to be zero at bo start */
+	drm_vma_offset_lock_lookup(mgr);
+	node = drm_vma_offset_lookup_locked(mgr, pgoff, 1);
+	if (node)
+		pgoff -= node->vm_node.start;
+	drm_vma_offset_unlock_lookup(mgr);
+
+	addr = current->mm->get_unmapped_area(file, uaddr, len, pgoff, flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+	if (addr & ~PAGE_MASK)
+		return addr;
+	if (addr > TASK_SIZE - len)
+		return addr;
+
+	if (len < HPAGE_PMD_SIZE)
+		return addr;
+	if (flags & MAP_FIXED)
+		return addr;
+	/*
+	 * Our priority is to support MAP_SHARED mapped hugely;
+	 * and support MAP_PRIVATE mapped hugely too, until it is COWed.
+	 * But if caller specified an address hint, respect that as before.
+	 */
+	if (uaddr)
+		return addr;
+
+	inflated_addr = drm_addr_inflate(addr, len, pgoff, flags,
+					 HPAGE_PMD_SIZE);
+
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
+	    len >= HPAGE_PUD_SIZE)
+		inflated_addr = drm_addr_inflate(inflated_addr, len, pgoff,
+						 flags, HPAGE_PUD_SIZE);
+	return inflated_addr;
+}
+#else /* CONFIG_TRANSPARENT_HUGEPAGE */
+unsigned long drm_get_unmapped_area(struct file *file,
+				    unsigned long uaddr, unsigned long len,
+				    unsigned long pgoff, unsigned long flags,
+				    struct drm_vma_offset_manager *mgr)
+{
+	return current->mm->get_unmapped_area(file, uaddr, len, pgoff, flags);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+EXPORT_SYMBOL_GPL(drm_get_unmapped_area);
diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
index 67af60bb527a..4719cc80d547 100644
--- a/include/drm/drm_file.h
+++ b/include/drm/drm_file.h
@@ -386,5 +386,10 @@ void drm_event_cancel_free(struct drm_device *dev,
 			   struct drm_pending_event *p);
 void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e);
 void drm_send_event(struct drm_device *dev, struct drm_pending_event *e);
+struct drm_vma_offset_manager;
+unsigned long drm_get_unmapped_area(struct file *file,
+				    unsigned long uaddr, unsigned long len,
+				    unsigned long pgoff, unsigned long flags,
+				    struct drm_vma_offset_manager *mgr);
 
 #endif /* _DRM_FILE_H_ */
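
For reference, not part of this patch: a driver that wants huge-page-friendly mmap
addresses would wire the new helper into its struct file_operations. The sketch below
is a minimal, hypothetical example under the assumption of a GEM-based driver whose
mmap offsets live in the drm_device's vma_offset_manager (the manager drm_gem_mmap()
consults); the my_-prefixed names are placeholders, not anything introduced by this
series.

/*
 * Hypothetical wiring of drm_get_unmapped_area() into a driver's fops.
 * Assumes a GEM driver using dev->vma_offset_manager for mmap offsets;
 * the "my_" identifiers are illustrative only.
 */
static unsigned long my_drm_get_unmapped_area(struct file *file,
					      unsigned long uaddr,
					      unsigned long len,
					      unsigned long pgoff,
					      unsigned long flags)
{
	struct drm_file *file_priv = file->private_data;
	struct drm_device *dev = file_priv->minor->dev;

	/* Let the helper pick a PMD/PUD-aligned address when it can. */
	return drm_get_unmapped_area(file, uaddr, len, pgoff, flags,
				     dev->vma_offset_manager);
}

static const struct file_operations my_driver_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,
	.mmap		= drm_gem_mmap,
	.poll		= drm_poll,
	.read		= drm_read,
	.llseek		= noop_llseek,
	.get_unmapped_area = my_drm_get_unmapped_area,
};

With such a hook in place, mmap() requests of HPAGE_PMD_SIZE or larger can be handed
PMD- (and, where supported, PUD-) aligned virtual addresses, which the later patches
in this series rely on to insert huge page table entries.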