From patchwork Thu Oct 24 07:24:36 2024
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 13848466
From: Bingbu Cao
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com
Cc: bingbu.cao@intel.com, bingbu.cao@linux.intel.com
Subject: [PATCH 1/4] media: ipu6: move the l2_unmap() up before l2_map()
Date: Thu, 24 Oct 2024 15:24:36 +0800
Message-Id: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>

l2_map() and l2_unmap() are better grouped together, and moving l2_unmap() up allows l2_map() to call it for error handling.
Signed-off-by: Bingbu Cao Signed-off-by: Jianhui Dai --- drivers/media/pci/intel/ipu6/ipu6-mmu.c | 80 ++++++++++++++++----------------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c index c3a20507d6db..9ea6789bca5e 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -252,6 +252,46 @@ static u32 *alloc_l2_pt(struct ipu6_mmu_info *mmu_info) return pt; } +static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t dummy, size_t size) +{ + u32 l1_idx = iova >> ISP_L1PT_SHIFT; + u32 iova_start = iova; + unsigned int l2_idx; + size_t unmapped = 0; + unsigned long flags; + u32 *l2_pt; + + dev_dbg(mmu_info->dev, "unmapping l2 page table for l1 index %u (iova 0x%8.8lx)\n", + l1_idx, iova); + + spin_lock_irqsave(&mmu_info->lock, flags); + if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) { + spin_unlock_irqrestore(&mmu_info->lock, flags); + dev_err(mmu_info->dev, + "unmap iova 0x%8.8lx l1 idx %u which was not mapped\n", + iova, l1_idx); + return 0; + } + + for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT) + < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) { + l2_pt = mmu_info->l2_pts[l1_idx]; + dev_dbg(mmu_info->dev, + "unmap l2 index %u with pteval 0x%10.10llx\n", + l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); + l2_pt[l2_idx] = mmu_info->dummy_page_pteval; + + clflush_cache_range((void *)&l2_pt[l2_idx], + sizeof(l2_pt[l2_idx])); + unmapped++; + } + spin_unlock_irqrestore(&mmu_info->lock, flags); + + return unmapped << ISP_PAGE_SHIFT; +} + static int l2_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, phys_addr_t paddr, size_t size) { @@ -336,46 +376,6 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, return l2_map(mmu_info, iova_start, paddr, size); } -static size_t l2_unmap(struct 
ipu6_mmu_info *mmu_info, unsigned long iova, - phys_addr_t dummy, size_t size) -{ - u32 l1_idx = iova >> ISP_L1PT_SHIFT; - u32 iova_start = iova; - unsigned int l2_idx; - size_t unmapped = 0; - unsigned long flags; - u32 *l2_pt; - - dev_dbg(mmu_info->dev, "unmapping l2 page table for l1 index %u (iova 0x%8.8lx)\n", - l1_idx, iova); - - spin_lock_irqsave(&mmu_info->lock, flags); - if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) { - spin_unlock_irqrestore(&mmu_info->lock, flags); - dev_err(mmu_info->dev, - "unmap iova 0x%8.8lx l1 idx %u which was not mapped\n", - iova, l1_idx); - return 0; - } - - for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; - (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT) - < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) { - l2_pt = mmu_info->l2_pts[l1_idx]; - dev_dbg(mmu_info->dev, - "unmap l2 index %u with pteval 0x%10.10llx\n", - l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); - l2_pt[l2_idx] = mmu_info->dummy_page_pteval; - - clflush_cache_range((void *)&l2_pt[l2_idx], - sizeof(l2_pt[l2_idx])); - unmapped++; - } - spin_unlock_irqrestore(&mmu_info->lock, flags); - - return unmapped << ISP_PAGE_SHIFT; -} - static size_t __ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, size_t size) {

From patchwork Thu Oct 24 07:24:37 2024
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 13848463
From: Bingbu Cao
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com
Cc: bingbu.cao@intel.com, bingbu.cao@linux.intel.com
Subject: [PATCH 2/4] media: ipu6: optimize the IPU6 MMU mapping flow
Date: Thu, 24 Oct 2024 15:24:37 +0800
Message-Id: <1729754679-6402-2-git-send-email-bingbu.cao@intel.com>
In-Reply-To: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>
References: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>

ipu6_mmu_map() operated on a per-page basis, taking and releasing the spinlock and calling clflush_cache_range() for every single page. This is inefficient, especially when handling dma-bufs with a large number of pages. However, the pages are likely made contiguous by the IOMMU DMA driver, so the IPU MMU driver can map such runs into fewer entries in the L1 table. This change enhances ipu6_mmu_map() to process multiple contiguous pages in batches, significantly reducing the number of spin_lock()/spin_unlock() and clflush_cache_range() calls and improving performance.
Signed-off-by: Bingbu Cao Signed-off-by: Jianhui Dai --- drivers/media/pci/intel/ipu6/ipu6-mmu.c | 144 +++++++++++++++----------------- 1 file changed, 69 insertions(+), 75 deletions(-) diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c index 9ea6789bca5e..e957ccb4691d 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -295,72 +295,90 @@ static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, static int l2_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, phys_addr_t paddr, size_t size) { - u32 l1_idx = iova >> ISP_L1PT_SHIFT; - u32 iova_start = iova; + struct device *dev = mmu_info->dev; + unsigned int l2_entries; u32 *l2_pt, *l2_virt; unsigned int l2_idx; unsigned long flags; + size_t mapped = 0; dma_addr_t dma; u32 l1_entry; - - dev_dbg(mmu_info->dev, - "mapping l2 page table for l1 index %u (iova %8.8x)\n", - l1_idx, (u32)iova); + u32 l1_idx; + int err = 0; spin_lock_irqsave(&mmu_info->lock, flags); - l1_entry = mmu_info->l1_pt[l1_idx]; - if (l1_entry == mmu_info->dummy_l2_pteval) { - l2_virt = mmu_info->l2_pts[l1_idx]; - if (likely(!l2_virt)) { - l2_virt = alloc_l2_pt(mmu_info); - if (!l2_virt) { - spin_unlock_irqrestore(&mmu_info->lock, flags); - return -ENOMEM; - } - } - - dma = map_single(mmu_info, l2_virt); - if (!dma) { - dev_err(mmu_info->dev, "Failed to map l2pt page\n"); - free_page((unsigned long)l2_virt); - spin_unlock_irqrestore(&mmu_info->lock, flags); - return -EINVAL; - } - - l1_entry = dma >> ISP_PADDR_SHIFT; - - dev_dbg(mmu_info->dev, "page for l1_idx %u %p allocated\n", - l1_idx, l2_virt); - mmu_info->l1_pt[l1_idx] = l1_entry; - mmu_info->l2_pts[l1_idx] = l2_virt; - clflush_cache_range((void *)&mmu_info->l1_pt[l1_idx], - sizeof(mmu_info->l1_pt[l1_idx])); - } - - l2_pt = mmu_info->l2_pts[l1_idx]; - - dev_dbg(mmu_info->dev, "l2_pt at %p with dma 0x%x\n", l2_pt, l1_entry); paddr = ALIGN(paddr, ISP_PAGE_SIZE); + for (l1_idx = 
iova >> ISP_L1PT_SHIFT; + size > 0 && l1_idx < ISP_L1PT_PTES; l1_idx++) { + dev_dbg(dev, + "mapping l2 page table for l1 index %u (iova %8.8x)\n", + l1_idx, (u32)iova); - l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + l1_entry = mmu_info->l1_pt[l1_idx]; + if (l1_entry == mmu_info->dummy_l2_pteval) { + l2_virt = mmu_info->l2_pts[l1_idx]; + if (likely(!l2_virt)) { + l2_virt = alloc_l2_pt(mmu_info); + if (!l2_virt) { + err = -ENOMEM; + goto error; + } + } - dev_dbg(mmu_info->dev, "l2_idx %u, phys 0x%8.8x\n", l2_idx, - l2_pt[l2_idx]); - if (l2_pt[l2_idx] != mmu_info->dummy_page_pteval) { - spin_unlock_irqrestore(&mmu_info->lock, flags); - return -EINVAL; + dma = map_single(mmu_info, l2_virt); + if (!dma) { + dev_err(dev, "Failed to map l2pt page\n"); + free_page((unsigned long)l2_virt); + err = -EINVAL; + goto error; + } + + l1_entry = dma >> ISP_PADDR_SHIFT; + + dev_dbg(dev, "page for l1_idx %u %p allocated\n", + l1_idx, l2_virt); + mmu_info->l1_pt[l1_idx] = l1_entry; + mmu_info->l2_pts[l1_idx] = l2_virt; + + clflush_cache_range(&mmu_info->l1_pt[l1_idx], + sizeof(mmu_info->l1_pt[l1_idx])); + } + + l2_pt = mmu_info->l2_pts[l1_idx]; + l2_entries = 0; + + for (l2_idx = (iova & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + size > 0 && l2_idx < ISP_L2PT_PTES; l2_idx++) { + l2_pt[l2_idx] = paddr >> ISP_PADDR_SHIFT; + + dev_dbg(dev, "l2 index %u mapped as 0x%8.8x\n", l2_idx, + l2_pt[l2_idx]); + + iova += ISP_PAGE_SIZE; + paddr += ISP_PAGE_SIZE; + mapped += ISP_PAGE_SIZE; + size -= ISP_PAGE_SIZE; + + l2_entries++; + } + + WARN_ON_ONCE(!l2_entries); + clflush_cache_range(&l2_pt[l2_idx - l2_entries], + sizeof(l2_pt[0]) * l2_entries); } - l2_pt[l2_idx] = paddr >> ISP_PADDR_SHIFT; - - clflush_cache_range((void *)&l2_pt[l2_idx], sizeof(l2_pt[l2_idx])); spin_unlock_irqrestore(&mmu_info->lock, flags); - dev_dbg(mmu_info->dev, "l2 index %u mapped as 0x%8.8x\n", l2_idx, - l2_pt[l2_idx]); - return 0; + +error: + spin_unlock_irqrestore(&mmu_info->lock, flags); + /* unroll mapping in 
case something went wrong */ + if (mapped) + l2_unmap(mmu_info, iova - mapped, paddr - mapped, mapped); + + return err; } static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, @@ -692,10 +710,7 @@ size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, phys_addr_t paddr, size_t size) { - unsigned long orig_iova = iova; unsigned int min_pagesz; - size_t orig_size = size; - int ret = 0; if (mmu_info->pgsize_bitmap == 0UL) return -ENODEV; @@ -718,28 +733,7 @@ int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, dev_dbg(mmu_info->dev, "map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size); - while (size) { - size_t pgsize = ipu6_mmu_pgsize(mmu_info->pgsize_bitmap, - iova | paddr, size); - - dev_dbg(mmu_info->dev, - "mapping: iova 0x%lx pa %pa pgsize 0x%zx\n", - iova, &paddr, pgsize); - - ret = __ipu6_mmu_map(mmu_info, iova, paddr, pgsize); - if (ret) - break; - - iova += pgsize; - paddr += pgsize; - size -= pgsize; - } - - /* unroll mapping in case something went wrong */ - if (ret) - ipu6_mmu_unmap(mmu_info, orig_iova, orig_size - size); - - return ret; + return __ipu6_mmu_map(mmu_info, iova, paddr, size); } static void ipu6_mmu_destroy(struct ipu6_mmu *mmu)

From patchwork Thu Oct 24 07:24:38 2024
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 13848465
From: Bingbu Cao
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com
Cc: bingbu.cao@intel.com, bingbu.cao@linux.intel.com
Subject: [PATCH 3/4] media: ipu6: optimize the IPU6 MMU unmapping flow
Date: Thu, 24 Oct 2024 15:24:38 +0800
Message-Id: <1729754679-6402-3-git-send-email-bingbu.cao@intel.com>
In-Reply-To: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>
References: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>

The MMU mapping flow has been optimized to improve performance; the unmapping flow can be optimized to follow the same pattern. This also changes ipu6_mmu_unmap() into a void function.
Signed-off-by: Bingbu Cao Signed-off-by: Jianhui Dai --- drivers/media/pci/intel/ipu6/ipu6-mmu.c | 97 +++++++++++++++------------------ drivers/media/pci/intel/ipu6/ipu6-mmu.h | 4 +- 2 files changed, 45 insertions(+), 56 deletions(-) diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c index e957ccb4691d..d746e42918ae 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -252,44 +252,51 @@ static u32 *alloc_l2_pt(struct ipu6_mmu_info *mmu_info) return pt; } -static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, - phys_addr_t dummy, size_t size) +static void l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t dummy, size_t size) { - u32 l1_idx = iova >> ISP_L1PT_SHIFT; - u32 iova_start = iova; + unsigned int l2_entries; unsigned int l2_idx; - size_t unmapped = 0; unsigned long flags; + u32 l1_idx; u32 *l2_pt; - dev_dbg(mmu_info->dev, "unmapping l2 page table for l1 index %u (iova 0x%8.8lx)\n", - l1_idx, iova); - spin_lock_irqsave(&mmu_info->lock, flags); - if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) { - spin_unlock_irqrestore(&mmu_info->lock, flags); - dev_err(mmu_info->dev, - "unmap iova 0x%8.8lx l1 idx %u which was not mapped\n", - iova, l1_idx); - return 0; - } - - for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; - (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT) - < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) { - l2_pt = mmu_info->l2_pts[l1_idx]; + for (l1_idx = iova >> ISP_L1PT_SHIFT; + size > 0 && l1_idx < ISP_L1PT_PTES; l1_idx++) { dev_dbg(mmu_info->dev, - "unmap l2 index %u with pteval 0x%10.10llx\n", - l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); - l2_pt[l2_idx] = mmu_info->dummy_page_pteval; + "unmapping l2 pgtable (l1 index %u (iova 0x%8.8lx))\n", + l1_idx, iova); - clflush_cache_range((void *)&l2_pt[l2_idx], - sizeof(l2_pt[l2_idx])); - unmapped++; + if (mmu_info->l1_pt[l1_idx] == 
mmu_info->dummy_l2_pteval) { + dev_err(mmu_info->dev, + "unmap not mapped iova 0x%8.8lx l1 index %u\n", + iova, l1_idx); + continue; + } + l2_pt = mmu_info->l2_pts[l1_idx]; + + l2_entries = 0; + for (l2_idx = (iova & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + size > 0 && l2_idx < ISP_L2PT_PTES; l2_idx++) { + dev_dbg(mmu_info->dev, + "unmap l2 index %u with pteval 0x%10.10llx\n", + l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); + l2_pt[l2_idx] = mmu_info->dummy_page_pteval; + + iova += ISP_PAGE_SIZE; + size -= ISP_PAGE_SIZE; + + l2_entries++; + } + + WARN_ON_ONCE(!l2_entries); + clflush_cache_range(&l2_pt[l2_idx - l2_entries], + sizeof(l2_pt[0]) * l2_entries); } + + WARN_ON_ONCE(size); spin_unlock_irqrestore(&mmu_info->lock, flags); - - return unmapped << ISP_PAGE_SHIFT; } static int l2_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, @@ -394,8 +401,8 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, return l2_map(mmu_info, iova_start, paddr, size); } -static size_t __ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, - unsigned long iova, size_t size) +static void __ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, + unsigned long iova, size_t size) { return l2_unmap(mmu_info, iova, 0, size); } @@ -665,12 +672,13 @@ static size_t ipu6_mmu_pgsize(unsigned long pgsize_bitmap, return pgsize; } -size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, - size_t size) +void ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + size_t size) { - size_t unmapped_page, unmapped = 0; unsigned int min_pagesz; + dev_dbg(mmu_info->dev, "unmapping iova 0x%lx size 0x%zx\n", iova, size); + /* find out the minimum page size supported */ min_pagesz = 1 << __ffs(mmu_info->pgsize_bitmap); @@ -682,29 +690,10 @@ size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, if (!IS_ALIGNED(iova | size, min_pagesz)) { dev_err(NULL, "unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n", iova, size, min_pagesz); - return -EINVAL; + 
return; } - /* - * Keep iterating until we either unmap 'size' bytes (or more) - * or we hit an area that isn't mapped. - */ - while (unmapped < size) { - size_t pgsize = ipu6_mmu_pgsize(mmu_info->pgsize_bitmap, - iova, size - unmapped); - - unmapped_page = __ipu6_mmu_unmap(mmu_info, iova, pgsize); - if (!unmapped_page) - break; - - dev_dbg(mmu_info->dev, "unmapped: iova 0x%lx size 0x%zx\n", - iova, unmapped_page); - - iova += unmapped_page; - unmapped += unmapped_page; - } - - return unmapped; + return __ipu6_mmu_unmap(mmu_info, iova, size); } int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.h b/drivers/media/pci/intel/ipu6/ipu6-mmu.h index 21cdb0f146eb..8e40b4a66d7d 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.h +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.h @@ -66,8 +66,8 @@ int ipu6_mmu_hw_init(struct ipu6_mmu *mmu); void ipu6_mmu_hw_cleanup(struct ipu6_mmu *mmu); int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, phys_addr_t paddr, size_t size); -size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, - size_t size); +void ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + size_t size); phys_addr_t ipu6_mmu_iova_to_phys(struct ipu6_mmu_info *mmu_info, dma_addr_t iova); #endif

From patchwork Thu Oct 24 07:24:39 2024
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 13848464
From: Bingbu Cao
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com
Cc: bingbu.cao@intel.com, bingbu.cao@linux.intel.com
Subject: [PATCH 4/4] media: ipu6: remove unused ipu6_mmu_pgsize()
Date: Thu, 24 Oct 2024 15:24:39 +0800
Message-Id: <1729754679-6402-4-git-send-email-bingbu.cao@intel.com>
In-Reply-To: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>
References: <1729754679-6402-1-git-send-email-bingbu.cao@intel.com>

ipu6_mmu_pgsize() can be removed, as the driver offloads this work to the PCI core driver DMA ops instead of handling it in the IPU6 driver.
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-mmu.c | 28 ---------------------------- 1 file changed, 28 deletions(-) diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c index d746e42918ae..2d9c6fbd5da2 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -644,34 +644,6 @@ phys_addr_t ipu6_mmu_iova_to_phys(struct ipu6_mmu_info *mmu_info, return phy_addr; } -static size_t ipu6_mmu_pgsize(unsigned long pgsize_bitmap, - unsigned long addr_merge, size_t size) -{ - unsigned int pgsize_idx; - size_t pgsize; - - /* Max page size that still fits into 'size' */ - pgsize_idx = __fls(size); - - if (likely(addr_merge)) { - /* Max page size allowed by address */ - unsigned int align_pgsize_idx = __ffs(addr_merge); - - pgsize_idx = min(pgsize_idx, align_pgsize_idx); - } - - pgsize = (1UL << (pgsize_idx + 1)) - 1; - pgsize &= pgsize_bitmap; - - WARN_ON(!pgsize); - - /* pick the biggest page */ - pgsize_idx = __fls(pgsize); - pgsize = 1UL << pgsize_idx; - - return pgsize; -} - void ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, size_t size) {