From patchwork Thu Mar 4 21:38:45 2021
X-Patchwork-Submitter: Hector Martin
X-Patchwork-Id: 12116849
From: Hector Martin
To: linux-arm-kernel@lists.infradead.org
Cc: Hector Martin, Marc Zyngier, Rob Herring, Arnd Bergmann,
    Olof Johansson, Krzysztof Kozlowski, Mark Kettenis, Tony Lindgren,
    Mohamed Mediouni, Stan Skowronek, Alexander Graf, Will Deacon,
    Linus Walleij, Mark Rutland, Andy Shevchenko, Greg Kroah-Hartman,
    Jonathan Corbet, Catalin Marinas, Christoph Hellwig, "David S.
    Miller", devicetree@vger.kernel.org, linux-serial@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-samsung-soc@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFT PATCH v3 10/27] docs: driver-api: device-io: Document ioremap() variants & access funcs
Date: Fri, 5 Mar 2021 06:38:45 +0900
Message-Id: <20210304213902.83903-11-marcan@marcan.st>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210304213902.83903-1-marcan@marcan.st>
References: <20210304213902.83903-1-marcan@marcan.st>

This documents the newly introduced ioremap_np() along with all the other
common ioremap() variants, and some higher-level abstractions available.

Signed-off-by: Hector Martin
Reviewed-by: Linus Walleij
---
 Documentation/driver-api/device-io.rst | 218 +++++++++++++++++++++++++
 1 file changed, 218 insertions(+)

diff --git a/Documentation/driver-api/device-io.rst b/Documentation/driver-api/device-io.rst
index b20864b3ddc7..0e12a1d3592b 100644
--- a/Documentation/driver-api/device-io.rst
+++ b/Documentation/driver-api/device-io.rst
@@ -284,6 +284,224 @@ insl, insw, insb, outsl, outsw, outsb
     first byte in the FIFO register corresponds to the first byte in the
     memory buffer regardless of the architecture.
 
+Device memory mapping modes
+===========================
+
+Some architectures support multiple modes for mapping device memory.
+ioremap_*() variants provide a common abstraction around these
+architecture-specific modes, with a shared set of semantics.
+
+ioremap() is the most common mapping type, and is applicable to typical device
+memory (e.g. I/O registers). Other modes can offer weaker or stronger
+guarantees, if supported by the architecture. In order from strongest to
+weakest, they are as follows:
+
+ioremap_np()
+------------
+
+Like ioremap(), but explicitly requests non-posted write semantics. On some
+architectures and buses, ioremap() mappings have posted write semantics, which
+means that writes can appear to "complete" from the point of view of the
+CPU before the written data actually arrives at the target device. Writes are
+still ordered with respect to other writes and reads from the same device, but
+due to the posted write semantics, this is not the case with respect to other
+devices. ioremap_np() explicitly requests non-posted semantics, which means
+that the write instruction will not appear to complete until the device has
+received (and to some platform-specific extent acknowledged) the written data.
+
+This mapping mode primarily exists to cater for platforms with bus fabrics that
+require this particular mapping mode to work correctly. These platforms set the
+``IORESOURCE_MEM_NONPOSTED`` flag for a resource that requires ioremap_np()
+semantics and portable drivers should use an abstraction that automatically
+selects it where appropriate (see the `Higher-level ioremap abstractions`_
+section below).
+
+The bare ioremap_np() is only available on some architectures; on others, it
+always returns NULL. Drivers should not normally use it, unless they are
+platform-specific or they derive benefit from non-posted writes where
+supported, and can fall back to ioremap() otherwise. The normal approach to
+ensure posted write completion is to do a dummy read after a write as
+explained in `Accessing the device`_, which works with ioremap() on all
+platforms.
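+
+For example, a platform-specific driver that benefits from non-posted writes
+where available could try ioremap_np() first and fall back to ioremap(). This
+is only a sketch; ``res`` and the surrounding error handling are illustrative::
+
+    void __iomem *base;
+
+    /* prefer non-posted semantics where the architecture provides them */
+    base = ioremap_np(res->start, resource_size(res));
+    if (!base)
+            /* fall back to the default (possibly posted) mapping mode */
+            base = ioremap(res->start, resource_size(res));
+    if (!base)
+            return -ENOMEM;
+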
+ioremap_np() should never be used for PCI drivers. PCI memory space writes are
+always posted, even on architectures that otherwise implement ioremap_np().
+Using ioremap_np() for PCI BARs will at best result in posted write semantics,
+and at worst result in complete breakage.
+
+Note that non-posted write semantics are orthogonal to CPU-side ordering
+guarantees. A CPU may still choose to issue other reads or writes before a
+non-posted write instruction retires. See the previous section on MMIO access
+functions for details on the CPU side of things.
+
+ioremap()
+---------
+
+The default mode, suitable for most memory-mapped devices, e.g. control
+registers. Memory mapped using ioremap() has the following characteristics:
+
+* Uncached - CPU-side caches are bypassed, and all reads and writes are handled
+  directly by the device.
+* No speculative operations - the CPU may not issue a read or write to this
+  memory, unless the instruction that does so has been reached in committed
+  program flow.
+* No reordering - The CPU may not reorder accesses to this memory mapping with
+  respect to each other. On some architectures, this relies on barriers in
+  readl_relaxed()/writel_relaxed().
+* No repetition - The CPU may not issue multiple reads or writes for a single
+  program instruction.
+* No write-combining - Each I/O operation results in one discrete read or write
+  being issued to the device, and multiple writes are not combined into larger
+  writes. This may or may not be enforced when using __raw I/O accessors or
+  pointer dereferences.
+* Non-executable - The CPU is not allowed to speculate instruction execution
+  from this memory (it probably goes without saying, but you're also not
+  allowed to jump into device memory).
+
+On many platforms and buses (e.g. PCI), writes issued through ioremap()
+mappings are posted, which means that the CPU does not wait for the write to
+actually reach the target device before retiring the write instruction.
+
+On many platforms, I/O accesses must be aligned with respect to the access
+size; failure to do so will result in an exception or unpredictable results.
+
+ioremap_uc()
+------------
+
+ioremap_uc() behaves like ioremap() except that on the x86 architecture without
+'PAT' mode, it marks memory as uncached even when the MTRR has designated
+it as cacheable; see Documentation/x86/pat.rst.
+
+This should not be used in portable drivers.
+
+ioremap_wc()
+------------
+
+Maps I/O memory as normal memory with write combining. Unlike ioremap(),
+
+* The CPU may speculatively issue reads from the device that the program
+  didn't actually execute, and may choose to basically read whatever it wants.
+* The CPU may reorder operations as long as the result is consistent from the
+  program's point of view.
+* The CPU may write to the same location multiple times, even when the program
+  issued a single write.
+* The CPU may combine several writes into a single larger write.
+
+This mode is typically used for video framebuffers, where it can increase
+performance of writes. It can also be used for other blocks of memory in
+devices (e.g. buffers or shared memory), but care must be taken as accesses are
+not guaranteed to be ordered with respect to normal ioremap() MMIO register
+accesses without explicit barriers.
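+
+For example, a driver that fills a write-combined buffer and then tells the
+device about it through a normal ioremap() register mapping may need an
+explicit write barrier in between. This is only a sketch; the ``wc_buf``,
+``regs``, ``data``, ``len`` and ``FOO_REG_DOORBELL`` names are illustrative::
+
+    memcpy_toio(wc_buf, data, len);         /* via an ioremap_wc() mapping */
+    wmb();                                  /* order the WC writes before the doorbell */
+    writel(1, regs + FOO_REG_DOORBELL);     /* via an ioremap() mapping */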
+
+On a PCI bus, it is usually safe to use ioremap_wc() on MMIO areas marked as
+``IORESOURCE_PREFETCH``, but it may not be used on those without the flag.
+For on-chip devices, there is no corresponding flag, but a driver can use
+ioremap_wc() on a device that is known to be safe.
+
+ioremap_wt()
+------------
+
+Maps I/O memory as normal memory with write-through caching. Like ioremap_wc(),
+but also,
+
+* The CPU may cache writes issued to and reads from the device, and serve reads
+  from that cache.
+
+This mode is sometimes used for video framebuffers, where drivers still expect
+writes to reach the device in a timely manner (and not be stuck in the CPU
+cache), but reads may be served from the cache for efficiency. However, it is
+rarely useful these days, as framebuffer drivers usually perform writes only,
+for which ioremap_wc() is more efficient (as it doesn't needlessly trash the
+cache). Most drivers should not use this.
+
+ioremap_cache()
+---------------
+
+ioremap_cache() effectively maps I/O memory as normal RAM. CPU write-back
+caches can be used, and the CPU is free to treat the device as if it were a
+block of RAM. This should never be used for device memory which has side
+effects of any kind, or which does not return the data previously written on
+read.
+
+It should also not be used for actual RAM, as the returned pointer is an
+``__iomem`` token. memremap() can be used for mapping normal RAM that is
+outside of the linear kernel memory area to a regular pointer.
+
+Portable drivers should avoid the use of ioremap_cache().
+
+Architecture example
+--------------------
+
+Here is how the above modes map to memory attribute settings on the ARM64
+architecture:
+
++------------------------+--------------------------------------------+
+| API                    | Memory region type and cacheability        |
++------------------------+--------------------------------------------+
+| ioremap_np()           | Device-nGnRnE                              |
++------------------------+--------------------------------------------+
+| ioremap()              | Device-nGnRE                               |
++------------------------+--------------------------------------------+
+| ioremap_uc()           | (not implemented)                          |
++------------------------+--------------------------------------------+
+| ioremap_wc()           | Normal-Non Cacheable                       |
++------------------------+--------------------------------------------+
+| ioremap_wt()           | (not implemented; fallback to ioremap)     |
++------------------------+--------------------------------------------+
+| ioremap_cache()        | Normal-Write-Back Cacheable                |
++------------------------+--------------------------------------------+
+
+Higher-level ioremap abstractions
+=================================
+
+Instead of using the above raw ioremap() modes, drivers are encouraged to use
+higher-level APIs. These APIs may implement platform-specific logic to
+automatically choose an appropriate ioremap mode on any given bus, allowing for
+a platform-agnostic driver to work on those platforms without any special
+cases. At the time of this writing, the following ioremap() wrappers have such
+logic:
+
+devm_ioremap_resource()
+
+  Can automatically select ioremap_np() over ioremap() according to platform
+  requirements, if the ``IORESOURCE_MEM_NONPOSTED`` flag is set on the struct
+  resource. Uses devres to automatically unmap the resource when the driver
+  probe() function fails or a device is unbound from its driver.
+
+  Documented in Documentation/driver-api/driver-model/devres.rst.
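+
+  For example, a platform driver can let this wrapper pick the correct mapping
+  mode in its probe() function. This is only a sketch; the ``foo_*`` names and
+  the ``priv->base`` field are illustrative::
+
+    static int foo_probe(struct platform_device *pdev)
+    {
+            struct foo_priv *priv;
+            struct resource *res;
+
+            priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+            if (!priv)
+                    return -ENOMEM;
+
+            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+            priv->base = devm_ioremap_resource(&pdev->dev, res);
+            if (IS_ERR(priv->base))
+                    return PTR_ERR(priv->base);
+
+            /* mapped with ioremap_np() or ioremap(), as the platform requires */
+            return 0;
+    }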
+
+of_address_to_resource()
+
+  Automatically sets the ``IORESOURCE_MEM_NONPOSTED`` flag for platforms that
+  require non-posted writes for certain buses (see the nonposted-mmio and
+  posted-mmio device tree properties).
+
+of_iomap()
+
+  Maps the resource described in a ``reg`` property in the device tree, doing
+  all required translations. Automatically selects ioremap_np() according to
+  platform requirements, as above.
+
+pci_ioremap_bar(), pci_ioremap_wc_bar()
+
+  Maps the resource described in a PCI base address register (BAR) without
+  having to extract the physical address first.
+
+pci_iomap(), pci_iomap_wc()
+
+  Like pci_ioremap_bar()/pci_ioremap_wc_bar(), but also works on I/O space when
+  used together with ioread32()/iowrite32() and similar accessors.
+
+pcim_iomap()
+
+  Like pci_iomap(), but uses devres to automatically unmap the resource when
+  the driver probe() function fails or a device is unbound from its driver.
+
+  Documented in Documentation/driver-api/driver-model/devres.rst.
+
+Not using these wrappers may make drivers unusable on certain platforms with
+stricter rules for mapping I/O memory.
+
 Public Functions Provided
 =========================