From patchwork Wed May 26 14:47:34 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 12281975
From: Daniel Vetter
To: Intel Graphics Development, DRI Development
Cc: Daniel Vetter, Christian König, Jason Gunthorpe,
 Suren Baghdasaryan, Matthew Wilcox, John Stultz, Sumit Semwal,
 linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 1/3] dma-buf: Require VM_PFNMAP vma for mmap
Date: Wed, 26 May 2021 16:47:34 +0200
Message-Id: <20210526144736.3277595-1-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.31.0
tl;dr: DMA buffers aren't normal memory; expecting that you can use
them like normal memory (e.g. that get_user_pages works, or that
they're accounted like any other normal memory) cannot be guaranteed.

Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge temptation
to assume that a struct page is always present and usable like for any
other pagecache-backed mmap. This has the potential to result in a
uapi nightmare.

To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.

Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.

v2: Jason brought up that we also want to guarantee that all ptes have
the pte_special flag set, to catch fast get_user_pages (on
architectures that support this). Allowing VM_MIXEDMAP (like
VM_SPECIAL does) would still allow vm_insert_page, but limiting to
VM_PFNMAP will catch that. From auditing the various functions that
insert pfn pte entries (vm_insert_pfn_prot, remap_pfn_range and all
its callers like dma_mmap_wc) it looks like VM_PFNMAP is already
required anyway, so this should be the correct flag to check for.

References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/
Acked-by: Christian König
Cc: Jason Gunthorpe
Cc: Suren Baghdasaryan
Cc: Matthew Wilcox
Cc: John Stultz
Signed-off-by: Daniel Vetter
Cc: Sumit Semwal
Cc: "Christian König"
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
Resending this so I can test the next two patches for vgem/shmem in
intel-gfx-ci. The last round failed somehow, but I can't repro that at
all locally here.

No immediate plans to merge this patch here since ttm isn't addressed
yet (and there we have the hugepte issue, for which I don't think we
have a clear consensus yet).
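For illustration, here's roughly what a conforming exporter mmap
callback looks like, minus error handling. This sketch is not part of
the patch; the my_heap_buffer struct and its paddr/size fields are
hypothetical stand-ins for an exporter's real per-buffer state:

#include <linux/dma-buf.h>
#include <linux/mm.h>

/* Hypothetical exporter state, for illustration only. */
struct my_heap_buffer {
	phys_addr_t paddr;	/* physical base of the backing memory */
	size_t size;
};

static int my_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_heap_buffer *buf = dmabuf->priv;

	/*
	 * remap_pfn_range() marks the vma VM_IO | VM_PFNMAP and inserts
	 * special ptes, so both the slow and the fast get_user_pages()
	 * paths refuse to touch the mapping. An exporter that used
	 * vm_insert_page() instead would end up with a VM_MIXEDMAP vma
	 * and trip the WARN_ON this patch adds.
	 */
	return remap_pfn_range(vma, vma->vm_start,
			       (buf->paddr >> PAGE_SHIFT) + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}

This is also why exporters built on dma_mmap_wc and friends are fine:
those bottom out in remap_pfn_range, so they already get VM_PFNMAP.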
-Daniel
---
 drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index eadd1eaa2fb5..dda583fb1f03 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
 static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 {
 	struct dma_buf *dmabuf;
+	int ret;
 
 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	    dmabuf->size >> PAGE_SHIFT)
 		return -EINVAL;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 
 static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 		 unsigned long pgoff)
 {
+	int ret;
+
 	if (WARN_ON(!dmabuf || !vma))
 		return -EINVAL;
 
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	vma_set_file(vma, dmabuf->file);
 	vma->vm_pgoff = pgoff;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_mmap);