From patchwork Fri Apr 18 05:47:26 2025
X-Patchwork-Submitter: Harshvardhan Jha
X-Patchwork-Id: 14056725
From: Harshvardhan Jha
To: jgross@suse.com, sstabellini@kernel.org,
	boris.ostrovsky@oracle.com
Cc: harshvardhan.j.jha@oracle.com, xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org, iommu@lists.linux.dev,
	stable@vger.kernel.org
Subject: [PATCH 5.15.y v2 1/1] xen/swiotlb: relax alignment requirements
Date: Thu, 17 Apr 2025 22:47:26 -0700
Message-ID: <20250418054726.2442674-2-harshvardhan.j.jha@oracle.com>
In-Reply-To: <20250418054726.2442674-1-harshvardhan.j.jha@oracle.com>
References: <20250418054726.2442674-1-harshvardhan.j.jha@oracle.com>

[ Upstream commit 85fcb57c983f423180ba6ec5d0034242da05cc54 ]

When mapping a buffer for DMA via the .map_page or .map_sg DMA
operations, there is no need to check that the machine frames are
aligned according to the mapped area's size. All that is needed in
these cases is that the buffer is contiguous at machine level.

So carve out the alignment check from range_straddles_page_boundary()
and move it to a helper called by xen_swiotlb_alloc_coherent() and
xen_swiotlb_free_coherent() directly.

Fixes: 9f40ec84a797 ("xen/swiotlb: add alignment check for dma buffers")
Signed-off-by: Harshvardhan Jha
---
 drivers/xen/swiotlb-xen.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0392841a822fa..65da97be06285 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -75,19 +75,21 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
 	return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
 }
 
+static inline bool range_requires_alignment(phys_addr_t p, size_t size)
+{
+	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+	phys_addr_t bus_addr = pfn_to_bfn(XEN_PFN_DOWN(p)) << XEN_PAGE_SHIFT;
+
+	return IS_ALIGNED(p, algn) && !IS_ALIGNED(bus_addr, algn);
+}
+
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
-	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
 
 	next_bfn = pfn_to_bfn(xen_pfn);
 
-	/* If buffer is physically aligned, ensure DMA alignment. */
-	if (IS_ALIGNED(p, algn) &&
-	    !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
-		return 1;
-
 	for (i = 1; i < nr_pages; i++)
 		if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
 			return 1;
@@ -306,7 +308,8 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	phys = dma_to_phys(hwdev, *dma_handle);
 	dev_addr = xen_phys_to_dma(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
-	    !range_straddles_page_boundary(phys, size))
+	    !range_straddles_page_boundary(phys, size) &&
+	    !range_requires_alignment(phys, size))
 		*dma_handle = dev_addr;
 	else {
 		if (xen_create_contiguous_region(phys, order,
@@ -347,6 +350,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
+	    !range_requires_alignment(phys, size) &&
 	    TestClearPageXenRemapped(page))
 		xen_destroy_contiguous_region(phys, order);