From patchwork Fri Mar 26 05:55:05 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12165791
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi
Cc: Chris Wilson, Daniel Vetter, Peter Zijlstra,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org
Subject: [PATCH 4/4] i915: fix remap_io_sg to verify the pgprot
Date: Fri, 26 Mar 2021 06:55:05 +0100
Message-Id: <20210326055505.1424432-5-hch@lst.de>
In-Reply-To: <20210326055505.1424432-1-hch@lst.de>
References: <20210326055505.1424432-1-hch@lst.de>

remap_io_sg claims that the pgprot is pre-verified using an io_mapping,
but actually does not get passed an io_mapping and just uses the pgprot
in the VMA. Remove the apply_to_page_range abuse and just loop over
remap_pfn_range for each segment.

Note: this could use io_mapping_map_user by passing an iomap to
remap_io_sg if the maintainers can verify that the pgprot in the iomap
used by the only caller is indeed the desired one here.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/i915_mm.c | 73 +++++++++++-----------------------
 1 file changed, 23 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index 9a777b0ff59b05..4c8cd08c672d2d 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -28,46 +28,10 @@
 
 #include "i915_drv.h"
 
-struct remap_pfn {
-	struct mm_struct *mm;
-	unsigned long pfn;
-	pgprot_t prot;
-
-	struct sgt_iter sgt;
-	resource_size_t iobase;
-};
+#define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP)
 
 #define use_dma(io) ((io) != -1)
 
-static inline unsigned long sgt_pfn(const struct remap_pfn *r)
-{
-	if (use_dma(r->iobase))
-		return (r->sgt.dma + r->sgt.curr + r->iobase) >> PAGE_SHIFT;
-	else
-		return r->sgt.pfn + (r->sgt.curr >> PAGE_SHIFT);
-}
-
-static int remap_sg(pte_t *pte, unsigned long addr, void *data)
-{
-	struct remap_pfn *r = data;
-
-	if (GEM_WARN_ON(!r->sgt.sgp))
-		return -EINVAL;
-
-	/* Special PTE are not associated with any struct page */
-	set_pte_at(r->mm, addr, pte,
-		   pte_mkspecial(pfn_pte(sgt_pfn(r), r->prot)));
-	r->pfn++; /* track insertions in case we need to unwind later */
-
-	r->sgt.curr += PAGE_SIZE;
-	if (r->sgt.curr >= r->sgt.max)
-		r->sgt = __sgt_iter(__sg_next(r->sgt.sgp), use_dma(r->iobase));
-
-	return 0;
-}
-
-#define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP)
-
 /**
  * remap_io_sg - remap an IO mapping to userspace
  * @vma: user vma to map to
@@ -82,12 +46,7 @@ int remap_io_sg(struct vm_area_struct *vma,
 		unsigned long addr, unsigned long size,
 		struct scatterlist *sgl, resource_size_t iobase)
 {
-	struct remap_pfn r = {
-		.mm = vma->vm_mm,
-		.prot = vma->vm_page_prot,
-		.sgt = __sgt_iter(sgl, use_dma(iobase)),
-		.iobase = iobase,
-	};
+	unsigned long pfn, len, remapped = 0;
 	int err;
 
 	/* We rely on prevalidation of the io-mapping to skip track_pfn(). */
@@ -96,11 +55,25 @@ int remap_io_sg(struct vm_area_struct *vma,
 	if (!use_dma(iobase))
 		flush_cache_range(vma, addr, size);
 
-	err = apply_to_page_range(r.mm, addr, size, remap_sg, &r);
-	if (unlikely(err)) {
-		zap_vma_ptes(vma, addr, r.pfn << PAGE_SHIFT);
-		return err;
-	}
-
-	return 0;
+	do {
+		if (use_dma(iobase)) {
+			if (!sg_dma_len(sgl))
+				break;
+			pfn = (sg_dma_address(sgl) + iobase) >> PAGE_SHIFT;
+			len = sg_dma_len(sgl);
+		} else {
+			pfn = page_to_pfn(sg_page(sgl));
+			len = sgl->length;
+		}
+
+		err = remap_pfn_range(vma, addr + remapped, pfn, len,
+				      vma->vm_page_prot);
+		if (err)
+			break;
+		remapped += len;
+	} while ((sgl = __sg_next(sgl)));
+
+	if (err)
+		zap_vma_ptes(vma, addr, remapped);
+	return err;
 }
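
For reference, a rough, untested sketch of the io_mapping_map_user variant
mentioned in the note above, assuming the sole caller could hand down its
struct io_mapping instead of the raw iobase, and assuming the sg dma
addresses are offsets relative to iomap->base just as they are relative to
iobase today. The name remap_io_sg_iomap is made up for illustration and
this is not part of the patch; it would sit next to the code above in
i915_mm.c:

int remap_io_sg_iomap(struct vm_area_struct *vma, unsigned long addr,
		      struct scatterlist *sgl, struct io_mapping *iomap)
{
	unsigned long pfn, len, remapped = 0;
	int err = 0;

	/* Same precondition as remap_io_sg above. */
	GEM_BUG_ON((vma->vm_flags & EXPECTED_FLAGS) != EXPECTED_FLAGS);

	do {
		if (!sg_dma_len(sgl))
			break;
		/* Assumption: dma addresses are offsets into the io_mapping. */
		pfn = (iomap->base + sg_dma_address(sgl)) >> PAGE_SHIFT;
		len = sg_dma_len(sgl);

		/*
		 * io_mapping_map_user() (the helper the note refers to)
		 * derives the pgprot from the io_mapping rather than from
		 * vma->vm_page_prot.
		 */
		err = io_mapping_map_user(iomap, vma, addr + remapped, pfn, len);
		if (err)
			break;
		remapped += len;
	} while ((sgl = __sg_next(sgl)));

	if (err)
		zap_vma_ptes(vma, addr, remapped);
	return err;
}

The page-backed (iobase == -1) case would keep using remap_pfn_range as in
the patch above, since there is no io_mapping to take the pgprot from there.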