From patchwork Mon May 10 09:50:21 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12247387
X-Patchwork-Delegate: bhelgaas@google.com
From: Claire Chang
To: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
  Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
  jgross@suse.com, Christoph Hellwig, Marek Szyprowski
Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU
DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v6 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single Date: Mon, 10 May 2021 17:50:21 +0800 Message-Id: <20210510095026.3477496-11-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org> References: <20210510095026.3477496-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new function, release_slots, to make the code reusable for supporting different bounce buffer pools, e.g. restricted DMA pool. Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 35 ++++++++++++++++++++--------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index ef04d8f7708f..fa11787e4b95 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, return tlb_addr; } -/* - * tlb_addr is the physical address of the bounce buffer to unmap. - */ -void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, - size_t mapping_size, enum dma_data_direction dir, - unsigned long attrs) +static void release_slots(struct device *dev, phys_addr_t tlb_addr) { - struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned long flags; - unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); + unsigned int offset = swiotlb_align_offset(dev, tlb_addr); int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; int nslots = nr_slots(mem->slots[index].alloc_size + offset); int count, i; - /* - * First, sync the memory before unmapping the entry - */ - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) - swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE); - /* * Return the buffer to the free list by setting the corresponding * entries to indicate the number of contiguous entries available. @@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, spin_unlock_irqrestore(&mem->lock, flags); } +/* + * tlb_addr is the physical address of the bounce buffer to unmap. 
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 				    size_t size, enum dma_data_direction dir)
 {
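
Note: the payoff of this split comes later in the series, when the
restricted DMA pool gains an alloc/free path that returns memory straight
to its bounce buffer pool. A minimal sketch of such a caller, assuming
the swiotlb_free() and is_swiotlb_buffer(dev, ...) entry points that
later patches in this series introduce (the names below illustrate that
direction and are not part of this patch):

	/*
	 * Sketch only: a pool free path reusing release_slots().
	 * swiotlb_free() and the (dev, paddr) form of is_swiotlb_buffer()
	 * are assumed from later patches in this series.
	 */
	bool swiotlb_free(struct device *dev, struct page *page, size_t size)
	{
		phys_addr_t tlb_addr = page_to_phys(page);

		/* Only accept pages that came from this device's pool. */
		if (!is_swiotlb_buffer(dev, tlb_addr))
			return false;

		/*
		 * Unlike swiotlb_tbl_unmap_single(), no swiotlb_bounce()
		 * here: the memory was handed out by the pool itself, so
		 * there is no original buffer to sync back to before the
		 * slots are released.
		 */
		release_slots(dev, tlb_addr);

		return true;
	}

The DMA_FROM_DEVICE bounce stays in swiotlb_tbl_unmap_single() because
only a streaming DMA mapping has an original buffer to copy back to; any
path that merely returns pool memory can call release_slots() directly.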