From patchwork Sun Jun 20 12:57:25 2021
X-Patchwork-Submitter: Ojaswin Mujoo
X-Patchwork-Id: 12333405
Date: Sun, 20 Jun 2021 18:27:25 +0530
From: Ojaswin Mujoo
To: nsaenz@kernel.org
Cc: gregkh@linuxfoundation.org, stefan.wahren@i2se.com, arnd@arndb.de,
 dan.carpenter@oracle.com, phil@raspberrypi.com,
 bcm-kernel-feedback-list@broadcom.com,
 linux-arm-kernel@lists.infradead.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/5] staging: vchiq: Combine vchiq platform code into single file
Message-ID: <03145b153c7d1b1a285ccdefb60abc2377dc5558.1624185152.git.ojaswin98@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline

Combine the vchiq platform initialization code into a single file.
The file vchiq_2835_arm.c is now merged into vchiq_arm.c.

Signed-off-by: Ojaswin Mujoo
---
 drivers/staging/vc04_services/Makefile        |   1 -
 .../interface/vchiq_arm/vchiq_2835_arm.c      | 651 ------------------
 .../interface/vchiq_arm/vchiq_arm.c           | 635 +++++++++++++++++
 3 files changed, 635 insertions(+), 652 deletions(-)
 delete mode 100644 drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c

diff --git a/drivers/staging/vc04_services/Makefile b/drivers/staging/vc04_services/Makefile
index cc6371386a62..cc0a81996eda 100644
--- a/drivers/staging/vc04_services/Makefile
+++ b/drivers/staging/vc04_services/Makefile
@@ -4,7 +4,6 @@ obj-$(CONFIG_BCM2835_VCHIQ) += vchiq.o
 vchiq-objs := \
    interface/vchiq_arm/vchiq_core.o  \
    interface/vchiq_arm/vchiq_arm.o \
-   interface/vchiq_arm/vchiq_2835_arm.o \
    interface/vchiq_arm/vchiq_debugfs.o \
    interface/vchiq_arm/vchiq_connected.o \

diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
deleted file mode 100644
index 2a1d8d6541b2..000000000000
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+++ /dev/null
@@ -1,651 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
-/* Copyright (c) 2010-2012 Broadcom. All rights reserved.
*/ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define TOTAL_SLOTS (VCHIQ_SLOT_ZERO_SLOTS + 2 * 32) - -#include "vchiq_arm.h" -#include "vchiq_connected.h" -#include "vchiq_pagelist.h" - -#define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2) - -#define VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX 0 -#define VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX 1 - -#define BELL0 0x00 -#define BELL2 0x08 - -#define VCHIQ_DMA_POOL_SIZE PAGE_SIZE - -struct vchiq_2835_state { - int inited; - struct vchiq_arm_state arm_state; -}; - -struct vchiq_pagelist_info { - struct pagelist *pagelist; - size_t pagelist_buffer_size; - dma_addr_t dma_addr; - bool is_from_pool; - enum dma_data_direction dma_dir; - unsigned int num_pages; - unsigned int pages_need_release; - struct page **pages; - struct scatterlist *scatterlist; - unsigned int scatterlist_mapped; -}; - -static void __iomem *g_regs; -/* This value is the size of the L2 cache lines as understood by the - * VPU firmware, which determines the required alignment of the - * offsets/sizes in pagelists. - * - * Modern VPU firmware looks for a DT "cache-line-size" property in - * the VCHIQ node and will overwrite it with the actual L2 cache size, - * which the kernel must then respect. That property was rejected - * upstream, so we have to use the VPU firmware's compatibility value - * of 32. 
- */ -static unsigned int g_cache_line_size = 32; -static struct dma_pool *g_dma_pool; -static unsigned int g_use_36bit_addrs = 0; -static unsigned int g_fragments_size; -static char *g_fragments_base; -static char *g_free_fragments; -static struct semaphore g_free_fragments_sema; -static struct device *g_dev; -static struct device *g_dma_dev; - -static DEFINE_SEMAPHORE(g_free_fragments_mutex); - -static irqreturn_t -vchiq_doorbell_irq(int irq, void *dev_id); - -static struct vchiq_pagelist_info * -create_pagelist(char *buf, char __user *ubuf, size_t count, unsigned short type); - -static void -free_pagelist(struct vchiq_pagelist_info *pagelistinfo, - int actual); - -int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state *state) -{ - struct device *dev = &pdev->dev; - struct device *dma_dev = NULL; - struct vchiq_drvdata *drvdata = platform_get_drvdata(pdev); - struct rpi_firmware *fw = drvdata->fw; - struct vchiq_slot_zero *vchiq_slot_zero; - void *slot_mem; - dma_addr_t slot_phys; - u32 channelbase; - int slot_mem_size, frag_mem_size; - int err, irq, i; - - /* - * VCHI messages between the CPU and firmware use - * 32-bit bus addresses. 
- */ - err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); - - if (err < 0) - return err; - - g_cache_line_size = drvdata->cache_line_size; - g_fragments_size = 2 * g_cache_line_size; - - if (drvdata->use_36bit_addrs) { - struct device_node *dma_node = - of_find_compatible_node(NULL, NULL, "brcm,bcm2711-dma"); - - if (dma_node) { - struct platform_device *pdev; - - pdev = of_find_device_by_node(dma_node); - if (pdev) - dma_dev = &pdev->dev; - of_node_put(dma_node); - g_use_36bit_addrs = true; - } else { - dev_err(dev, "40-bit DMA controller not found\n"); - return -EINVAL; - } - } - - /* Allocate space for the channels in coherent memory */ - slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE); - frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS); - - slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size, - &slot_phys, GFP_KERNEL); - if (!slot_mem) { - dev_err(dev, "could not allocate DMA memory\n"); - return -ENOMEM; - } - - WARN_ON(((unsigned long)slot_mem & (PAGE_SIZE - 1)) != 0); - channelbase = slot_phys; - - vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size); - if (!vchiq_slot_zero) - return -EINVAL; - - vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] = - channelbase + slot_mem_size; - vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] = - MAX_FRAGMENTS; - - g_fragments_base = (char *)slot_mem + slot_mem_size; - - g_free_fragments = g_fragments_base; - for (i = 0; i < (MAX_FRAGMENTS - 1); i++) { - *(char **)&g_fragments_base[i*g_fragments_size] = - &g_fragments_base[(i + 1)*g_fragments_size]; - } - *(char **)&g_fragments_base[i * g_fragments_size] = NULL; - sema_init(&g_free_fragments_sema, MAX_FRAGMENTS); - - if (vchiq_init_state(state, vchiq_slot_zero) != VCHIQ_SUCCESS) - return -EINVAL; - - g_regs = devm_platform_ioremap_resource(pdev, 0); - if (IS_ERR(g_regs)) - return PTR_ERR(g_regs); - - irq = platform_get_irq(pdev, 0); - if (irq <= 0) - return irq; - - err = 
devm_request_irq(dev, irq, vchiq_doorbell_irq, IRQF_IRQPOLL, - "VCHIQ doorbell", state); - if (err) { - dev_err(dev, "failed to register irq=%d\n", irq); - return err; - } - - /* Send the base address of the slots to VideoCore */ - err = rpi_firmware_property(fw, RPI_FIRMWARE_VCHIQ_INIT, - &channelbase, sizeof(channelbase)); - if (err || channelbase) { - dev_err(dev, "failed to set channelbase\n"); - return err ? : -ENXIO; - } - - g_dev = dev; - g_dma_dev = dma_dev ?: dev; - g_dma_pool = dmam_pool_create("vchiq_scatter_pool", dev, - VCHIQ_DMA_POOL_SIZE, g_cache_line_size, - 0); - if (!g_dma_pool) { - dev_err(dev, "failed to create dma pool"); - return -ENOMEM; - } - - vchiq_log_info(vchiq_arm_log_level, - "vchiq_init - done (slots %pK, phys %pad)", - vchiq_slot_zero, &slot_phys); - - vchiq_call_connected_callbacks(); - - return 0; -} - -enum vchiq_status -vchiq_platform_init_state(struct vchiq_state *state) -{ - enum vchiq_status status = VCHIQ_SUCCESS; - struct vchiq_2835_state *platform_state; - - state->platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL); - if (!state->platform_state) - return VCHIQ_ERROR; - - platform_state = (struct vchiq_2835_state *)state->platform_state; - - platform_state->inited = 1; - status = vchiq_arm_init_state(state, &platform_state->arm_state); - - if (status != VCHIQ_SUCCESS) - platform_state->inited = 0; - - return status; -} - -struct vchiq_arm_state* -vchiq_platform_get_arm_state(struct vchiq_state *state) -{ - struct vchiq_2835_state *platform_state; - - platform_state = (struct vchiq_2835_state *)state->platform_state; - - WARN_ON_ONCE(!platform_state->inited); - - return &platform_state->arm_state; -} - -void -remote_event_signal(struct remote_event *event) -{ - wmb(); - - event->fired = 1; - - dsb(sy); /* data barrier operation */ - - if (event->armed) - writel(0, g_regs + BELL2); /* trigger vc interrupt */ -} - -enum vchiq_status -vchiq_prepare_bulk_data(struct vchiq_bulk *bulk, void *offset, - void __user 
*uoffset, int size, int dir) -{ - struct vchiq_pagelist_info *pagelistinfo; - - pagelistinfo = create_pagelist(offset, uoffset, size, - (dir == VCHIQ_BULK_RECEIVE) - ? PAGELIST_READ - : PAGELIST_WRITE); - - if (!pagelistinfo) - return VCHIQ_ERROR; - - bulk->data = pagelistinfo->dma_addr; - - /* - * Store the pagelistinfo address in remote_data, - * which isn't used by the slave. - */ - bulk->remote_data = pagelistinfo; - - return VCHIQ_SUCCESS; -} - -void -vchiq_complete_bulk(struct vchiq_bulk *bulk) -{ - if (bulk && bulk->remote_data && bulk->actual) - free_pagelist((struct vchiq_pagelist_info *)bulk->remote_data, - bulk->actual); -} - -int vchiq_dump_platform_state(void *dump_context) -{ - char buf[80]; - int len; - - len = snprintf(buf, sizeof(buf), - " Platform: 2835 (VC master)"); - return vchiq_dump(dump_context, buf, len + 1); -} - -/* - * Local functions - */ - -static irqreturn_t -vchiq_doorbell_irq(int irq, void *dev_id) -{ - struct vchiq_state *state = dev_id; - irqreturn_t ret = IRQ_NONE; - unsigned int status; - - /* Read (and clear) the doorbell */ - status = readl(g_regs + BELL0); - - if (status & 0x4) { /* Was the doorbell rung? */ - remote_event_pollall(state); - ret = IRQ_HANDLED; - } - - return ret; -} - -static void -cleanup_pagelistinfo(struct vchiq_pagelist_info *pagelistinfo) -{ - if (pagelistinfo->scatterlist_mapped) { - dma_unmap_sg(g_dma_dev, pagelistinfo->scatterlist, - pagelistinfo->num_pages, pagelistinfo->dma_dir); - } - - if (pagelistinfo->pages_need_release) - unpin_user_pages(pagelistinfo->pages, pagelistinfo->num_pages); - - if (pagelistinfo->is_from_pool) { - dma_pool_free(g_dma_pool, pagelistinfo->pagelist, - pagelistinfo->dma_addr); - } else { - dma_free_coherent(g_dev, pagelistinfo->pagelist_buffer_size, - pagelistinfo->pagelist, - pagelistinfo->dma_addr); - } -} - -/* There is a potential problem with partial cache lines (pages?) - * at the ends of the block when reading. 
If the CPU accessed anything in - * the same line (page?) then it may have pulled old data into the cache, - * obscuring the new data underneath. We can solve this by transferring the - * partial cache lines separately, and allowing the ARM to copy into the - * cached area. - */ - -static struct vchiq_pagelist_info * -create_pagelist(char *buf, char __user *ubuf, - size_t count, unsigned short type) -{ - struct pagelist *pagelist; - struct vchiq_pagelist_info *pagelistinfo; - struct page **pages; - u32 *addrs; - unsigned int num_pages, offset, i, k; - int actual_pages; - bool is_from_pool; - size_t pagelist_size; - struct scatterlist *scatterlist, *sg; - int dma_buffers; - dma_addr_t dma_addr; - - if (count >= INT_MAX - PAGE_SIZE) - return NULL; - - if (buf) - offset = (uintptr_t)buf & (PAGE_SIZE - 1); - else - offset = (uintptr_t)ubuf & (PAGE_SIZE - 1); - num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE); - - if (num_pages > (SIZE_MAX - sizeof(struct pagelist) - - sizeof(struct vchiq_pagelist_info)) / - (sizeof(u32) + sizeof(pages[0]) + - sizeof(struct scatterlist))) - return NULL; - - pagelist_size = sizeof(struct pagelist) + - (num_pages * sizeof(u32)) + - (num_pages * sizeof(pages[0]) + - (num_pages * sizeof(struct scatterlist))) + - sizeof(struct vchiq_pagelist_info); - - /* Allocate enough storage to hold the page pointers and the page - * list - */ - if (pagelist_size > VCHIQ_DMA_POOL_SIZE) { - pagelist = dma_alloc_coherent(g_dev, - pagelist_size, - &dma_addr, - GFP_KERNEL); - is_from_pool = false; - } else { - pagelist = dma_pool_alloc(g_dma_pool, GFP_KERNEL, &dma_addr); - is_from_pool = true; - } - - vchiq_log_trace(vchiq_arm_log_level, "%s - %pK", __func__, pagelist); - - if (!pagelist) - return NULL; - - addrs = pagelist->addrs; - pages = (struct page **)(addrs + num_pages); - scatterlist = (struct scatterlist *)(pages + num_pages); - pagelistinfo = (struct vchiq_pagelist_info *) - (scatterlist + num_pages); - - pagelist->length = count; - 
pagelist->type = type; - pagelist->offset = offset; - - /* Populate the fields of the pagelistinfo structure */ - pagelistinfo->pagelist = pagelist; - pagelistinfo->pagelist_buffer_size = pagelist_size; - pagelistinfo->dma_addr = dma_addr; - pagelistinfo->is_from_pool = is_from_pool; - pagelistinfo->dma_dir = (type == PAGELIST_WRITE) ? - DMA_TO_DEVICE : DMA_FROM_DEVICE; - pagelistinfo->num_pages = num_pages; - pagelistinfo->pages_need_release = 0; - pagelistinfo->pages = pages; - pagelistinfo->scatterlist = scatterlist; - pagelistinfo->scatterlist_mapped = 0; - - if (buf) { - unsigned long length = count; - unsigned int off = offset; - - for (actual_pages = 0; actual_pages < num_pages; - actual_pages++) { - struct page *pg = - vmalloc_to_page((buf + - (actual_pages * PAGE_SIZE))); - size_t bytes = PAGE_SIZE - off; - - if (!pg) { - cleanup_pagelistinfo(pagelistinfo); - return NULL; - } - - if (bytes > length) - bytes = length; - pages[actual_pages] = pg; - length -= bytes; - off = 0; - } - /* do not try and release vmalloc pages */ - } else { - actual_pages = pin_user_pages_fast( - (unsigned long)ubuf & PAGE_MASK, - num_pages, - type == PAGELIST_READ, - pages); - - if (actual_pages != num_pages) { - vchiq_log_info(vchiq_arm_log_level, - "%s - only %d/%d pages locked", - __func__, actual_pages, num_pages); - - /* This is probably due to the process being killed */ - if (actual_pages > 0) - unpin_user_pages(pages, actual_pages); - cleanup_pagelistinfo(pagelistinfo); - return NULL; - } - /* release user pages */ - pagelistinfo->pages_need_release = 1; - } - - /* - * Initialize the scatterlist so that the magic cookie - * is filled if debugging is enabled - */ - sg_init_table(scatterlist, num_pages); - /* Now set the pages for each scatterlist */ - for (i = 0; i < num_pages; i++) { - unsigned int len = PAGE_SIZE - offset; - - if (len > count) - len = count; - sg_set_page(scatterlist + i, pages[i], len, offset); - offset = 0; - count -= len; - } - - dma_buffers = 
dma_map_sg(g_dma_dev, - scatterlist, - num_pages, - pagelistinfo->dma_dir); - - if (dma_buffers == 0) { - cleanup_pagelistinfo(pagelistinfo); - return NULL; - } - - pagelistinfo->scatterlist_mapped = 1; - - /* Combine adjacent blocks for performance */ - k = 0; - if (g_use_36bit_addrs) { - for_each_sg(scatterlist, sg, dma_buffers, i) { - u32 len = sg_dma_len(sg); - u64 addr = sg_dma_address(sg); - u32 page_id = (u32)((addr >> 4) & ~0xff); - u32 sg_pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT; - - /* Note: addrs is the address + page_count - 1 - * The firmware expects blocks after the first to be page- - * aligned and a multiple of the page size - */ - WARN_ON(len == 0); - WARN_ON(i && - (i != (dma_buffers - 1)) && (len & ~PAGE_MASK)); - WARN_ON(i && (addr & ~PAGE_MASK)); - WARN_ON(upper_32_bits(addr) > 0xf); - if (k > 0 && - ((addrs[k - 1] & ~0xff) + - (((addrs[k - 1] & 0xff) + 1) << 8) - == page_id)) { - u32 inc_pages = min(sg_pages, - 0xff - (addrs[k - 1] & 0xff)); - addrs[k - 1] += inc_pages; - page_id += inc_pages << 8; - sg_pages -= inc_pages; - } - while (sg_pages) { - u32 inc_pages = min(sg_pages, 0x100u); - addrs[k++] = page_id | (inc_pages - 1); - page_id += inc_pages << 8; - sg_pages -= inc_pages; - } - } - } else { - for_each_sg(scatterlist, sg, dma_buffers, i) { - u32 len = sg_dma_len(sg); - u32 addr = sg_dma_address(sg); - u32 new_pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT; - - /* Note: addrs is the address + page_count - 1 - * The firmware expects blocks after the first to be page- - * aligned and a multiple of the page size - */ - WARN_ON(len == 0); - WARN_ON(i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK)); - WARN_ON(i && (addr & ~PAGE_MASK)); - if (k > 0 && - ((addrs[k - 1] & PAGE_MASK) + - (((addrs[k - 1] & ~PAGE_MASK) + 1) << PAGE_SHIFT)) - == (addr & PAGE_MASK)) - addrs[k - 1] += new_pages; - else - addrs[k++] = (addr & PAGE_MASK) | (new_pages - 1); - } - } - - /* Partial cache lines (fragments) require special measures */ - if ((type == 
PAGELIST_READ) && - ((pagelist->offset & (g_cache_line_size - 1)) || - ((pagelist->offset + pagelist->length) & - (g_cache_line_size - 1)))) { - char *fragments; - - if (down_interruptible(&g_free_fragments_sema)) { - cleanup_pagelistinfo(pagelistinfo); - return NULL; - } - - WARN_ON(!g_free_fragments); - - down(&g_free_fragments_mutex); - fragments = g_free_fragments; - WARN_ON(!fragments); - g_free_fragments = *(char **) g_free_fragments; - up(&g_free_fragments_mutex); - pagelist->type = PAGELIST_READ_WITH_FRAGMENTS + - (fragments - g_fragments_base) / g_fragments_size; - } - - return pagelistinfo; -} - -static void -free_pagelist(struct vchiq_pagelist_info *pagelistinfo, - int actual) -{ - struct pagelist *pagelist = pagelistinfo->pagelist; - struct page **pages = pagelistinfo->pages; - unsigned int num_pages = pagelistinfo->num_pages; - - vchiq_log_trace(vchiq_arm_log_level, "%s - %pK, %d", - __func__, pagelistinfo->pagelist, actual); - - /* - * NOTE: dma_unmap_sg must be called before the - * cpu can touch any of the data/pages. 
- */ - dma_unmap_sg(g_dma_dev, pagelistinfo->scatterlist, - pagelistinfo->num_pages, pagelistinfo->dma_dir); - pagelistinfo->scatterlist_mapped = 0; - - /* Deal with any partial cache lines (fragments) */ - if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) { - char *fragments = g_fragments_base + - (pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) * - g_fragments_size; - int head_bytes, tail_bytes; - - head_bytes = (g_cache_line_size - pagelist->offset) & - (g_cache_line_size - 1); - tail_bytes = (pagelist->offset + actual) & - (g_cache_line_size - 1); - - if ((actual >= 0) && (head_bytes != 0)) { - if (head_bytes > actual) - head_bytes = actual; - - memcpy((char *)kmap(pages[0]) + - pagelist->offset, - fragments, - head_bytes); - kunmap(pages[0]); - } - if ((actual >= 0) && (head_bytes < actual) && - (tail_bytes != 0)) { - memcpy((char *)kmap(pages[num_pages - 1]) + - ((pagelist->offset + actual) & - (PAGE_SIZE - 1) & ~(g_cache_line_size - 1)), - fragments + g_cache_line_size, - tail_bytes); - kunmap(pages[num_pages - 1]); - } - - down(&g_free_fragments_mutex); - *(char **)fragments = g_free_fragments; - g_free_fragments = fragments; - up(&g_free_fragments_mutex); - up(&g_free_fragments_sema); - } - - /* Need to mark all the pages dirty. 
*/ - if (pagelist->type != PAGELIST_WRITE && - pagelistinfo->pages_need_release) { - unsigned int i; - - for (i = 0; i < num_pages; i++) - set_page_dirty(pages[i]); - } - - cleanup_pagelistinfo(pagelistinfo); -} diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c index 06a19d6e2674..6e082ab29c1f 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c @@ -25,15 +25,32 @@ #include #include #include +#include +#include +#include #include #include "vchiq_core.h" #include "vchiq_ioctl.h" #include "vchiq_arm.h" #include "vchiq_debugfs.h" +#include "vchiq_connected.h" +#include "vchiq_pagelist.h" #define DEVICE_NAME "vchiq" +#define TOTAL_SLOTS (VCHIQ_SLOT_ZERO_SLOTS + 2 * 32) + +#define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2) + +#define VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX 0 +#define VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX 1 + +#define BELL0 0x00 +#define BELL2 0x08 + +#define VCHIQ_DMA_POOL_SIZE PAGE_SIZE + /* Override the default prefix, which would be vchiq_arm (from the filename) */ #undef MODULE_PARAM_PREFIX #define MODULE_PARAM_PREFIX DEVICE_NAME "." @@ -67,10 +84,628 @@ static struct vchiq_drvdata bcm2711_drvdata = { .use_36bit_addrs = true, }; +struct vchiq_2835_state { + int inited; + struct vchiq_arm_state arm_state; +}; + +struct vchiq_pagelist_info { + struct pagelist *pagelist; + size_t pagelist_buffer_size; + dma_addr_t dma_addr; + bool is_from_pool; + enum dma_data_direction dma_dir; + unsigned int num_pages; + unsigned int pages_need_release; + struct page **pages; + struct scatterlist *scatterlist; + unsigned int scatterlist_mapped; +}; + +static void __iomem *g_regs; +/* This value is the size of the L2 cache lines as understood by the + * VPU firmware, which determines the required alignment of the + * offsets/sizes in pagelists. 
+ * + * Modern VPU firmware looks for a DT "cache-line-size" property in + * the VCHIQ node and will overwrite it with the actual L2 cache size, + * which the kernel must then respect. That property was rejected + * upstream, so we have to use the VPU firmware's compatibility value + * of 32. + */ +static unsigned int g_cache_line_size = 32; +static struct dma_pool *g_dma_pool; +static unsigned int g_use_36bit_addrs = 0; +static unsigned int g_fragments_size; +static char *g_fragments_base; +static char *g_free_fragments; +static struct semaphore g_free_fragments_sema; +static struct device *g_dev; +static struct device *g_dma_dev; + +static DEFINE_SEMAPHORE(g_free_fragments_mutex); + +static irqreturn_t +vchiq_doorbell_irq(int irq, void *dev_id); + +static struct vchiq_pagelist_info * +create_pagelist(char *buf, char __user *ubuf, size_t count, unsigned short type); + +static void +free_pagelist(struct vchiq_pagelist_info *pagelistinfo, + int actual); + static enum vchiq_status vchiq_blocking_bulk_transfer(unsigned int handle, void *data, unsigned int size, enum vchiq_bulk_dir dir); +int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state *state) +{ + struct device *dev = &pdev->dev; + struct device *dma_dev = NULL; + struct vchiq_drvdata *drvdata = platform_get_drvdata(pdev); + struct rpi_firmware *fw = drvdata->fw; + struct vchiq_slot_zero *vchiq_slot_zero; + void *slot_mem; + dma_addr_t slot_phys; + u32 channelbase; + int slot_mem_size, frag_mem_size; + int err, irq, i; + + /* + * VCHI messages between the CPU and firmware use + * 32-bit bus addresses. 
+ */ + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); + + if (err < 0) + return err; + + g_cache_line_size = drvdata->cache_line_size; + g_fragments_size = 2 * g_cache_line_size; + + if (drvdata->use_36bit_addrs) { + struct device_node *dma_node = + of_find_compatible_node(NULL, NULL, "brcm,bcm2711-dma"); + + if (dma_node) { + struct platform_device *pdev; + + pdev = of_find_device_by_node(dma_node); + if (pdev) + dma_dev = &pdev->dev; + of_node_put(dma_node); + g_use_36bit_addrs = true; + } else { + dev_err(dev, "40-bit DMA controller not found\n"); + return -EINVAL; + } + } + + /* Allocate space for the channels in coherent memory */ + slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE); + frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS); + + slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size, + &slot_phys, GFP_KERNEL); + if (!slot_mem) { + dev_err(dev, "could not allocate DMA memory\n"); + return -ENOMEM; + } + + WARN_ON(((unsigned long)slot_mem & (PAGE_SIZE - 1)) != 0); + channelbase = slot_phys; + + vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size); + if (!vchiq_slot_zero) + return -EINVAL; + + vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] = + channelbase + slot_mem_size; + vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] = + MAX_FRAGMENTS; + + g_fragments_base = (char *)slot_mem + slot_mem_size; + + g_free_fragments = g_fragments_base; + for (i = 0; i < (MAX_FRAGMENTS - 1); i++) { + *(char **)&g_fragments_base[i*g_fragments_size] = + &g_fragments_base[(i + 1)*g_fragments_size]; + } + *(char **)&g_fragments_base[i * g_fragments_size] = NULL; + sema_init(&g_free_fragments_sema, MAX_FRAGMENTS); + + if (vchiq_init_state(state, vchiq_slot_zero) != VCHIQ_SUCCESS) + return -EINVAL; + + g_regs = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(g_regs)) + return PTR_ERR(g_regs); + + irq = platform_get_irq(pdev, 0); + if (irq <= 0) + return irq; + + err = 
devm_request_irq(dev, irq, vchiq_doorbell_irq, IRQF_IRQPOLL,
+			       "VCHIQ doorbell", state);
+	if (err) {
+		dev_err(dev, "failed to register irq=%d\n", irq);
+		return err;
+	}
+
+	/* Send the base address of the slots to VideoCore */
+	err = rpi_firmware_property(fw, RPI_FIRMWARE_VCHIQ_INIT,
+				    &channelbase, sizeof(channelbase));
+	if (err || channelbase) {
+		dev_err(dev, "failed to set channelbase\n");
+		return err ? : -ENXIO;
+	}
+
+	g_dev = dev;
+	g_dma_dev = dma_dev ?: dev;
+	g_dma_pool = dmam_pool_create("vchiq_scatter_pool", dev,
+				      VCHIQ_DMA_POOL_SIZE, g_cache_line_size,
+				      0);
+	if (!g_dma_pool) {
+		dev_err(dev, "failed to create dma pool");
+		return -ENOMEM;
+	}
+
+	vchiq_log_info(vchiq_arm_log_level,
+		       "vchiq_init - done (slots %pK, phys %pad)",
+		       vchiq_slot_zero, &slot_phys);
+
+	vchiq_call_connected_callbacks();
+
+	return 0;
+}
+
+enum vchiq_status
+vchiq_platform_init_state(struct vchiq_state *state)
+{
+	enum vchiq_status status = VCHIQ_SUCCESS;
+	struct vchiq_2835_state *platform_state;
+
+	state->platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL);
+	if (!state->platform_state)
+		return VCHIQ_ERROR;
+
+	platform_state = (struct vchiq_2835_state *)state->platform_state;
+
+	platform_state->inited = 1;
+	status = vchiq_arm_init_state(state, &platform_state->arm_state);
+
+	if (status != VCHIQ_SUCCESS)
+		platform_state->inited = 0;
+
+	return status;
+}
+
+struct vchiq_arm_state*
+vchiq_platform_get_arm_state(struct vchiq_state *state)
+{
+	struct vchiq_2835_state *platform_state;
+
+	platform_state = (struct vchiq_2835_state *)state->platform_state;
+
+	WARN_ON_ONCE(!platform_state->inited);
+
+	return &platform_state->arm_state;
+}
+
+void
+remote_event_signal(struct remote_event *event)
+{
+	wmb();
+
+	event->fired = 1;
+
+	dsb(sy);         /* data barrier operation */
+
+	if (event->armed)
+		writel(0, g_regs + BELL2); /* trigger vc interrupt */
+}
+
+enum vchiq_status
+vchiq_prepare_bulk_data(struct vchiq_bulk *bulk, void *offset,
+			void __user *uoffset, int size, int dir)
+{
+	struct vchiq_pagelist_info *pagelistinfo;
+
+	pagelistinfo = create_pagelist(offset, uoffset, size,
+				       (dir == VCHIQ_BULK_RECEIVE)
+				       ? PAGELIST_READ
+				       : PAGELIST_WRITE);
+
+	if (!pagelistinfo)
+		return VCHIQ_ERROR;
+
+	bulk->data = pagelistinfo->dma_addr;
+
+	/*
+	 * Store the pagelistinfo address in remote_data,
+	 * which isn't used by the slave.
+	 */
+	bulk->remote_data = pagelistinfo;
+
+	return VCHIQ_SUCCESS;
+}
+
+void
+vchiq_complete_bulk(struct vchiq_bulk *bulk)
+{
+	if (bulk && bulk->remote_data && bulk->actual)
+		free_pagelist((struct vchiq_pagelist_info *)bulk->remote_data,
+			      bulk->actual);
+}
+
+int vchiq_dump_platform_state(void *dump_context)
+{
+	char buf[80];
+	int len;
+
+	len = snprintf(buf, sizeof(buf),
+		       "  Platform: 2835 (VC master)");
+	return vchiq_dump(dump_context, buf, len + 1);
+}
+
+/*
+ * Local functions
+ */
+
+static irqreturn_t
+vchiq_doorbell_irq(int irq, void *dev_id)
+{
+	struct vchiq_state *state = dev_id;
+	irqreturn_t ret = IRQ_NONE;
+	unsigned int status;
+
+	/* Read (and clear) the doorbell */
+	status = readl(g_regs + BELL0);
+
+	if (status & 0x4) {  /* Was the doorbell rung? */
+		remote_event_pollall(state);
+		ret = IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
+static void
+cleanup_pagelistinfo(struct vchiq_pagelist_info *pagelistinfo)
+{
+	if (pagelistinfo->scatterlist_mapped) {
+		dma_unmap_sg(g_dma_dev, pagelistinfo->scatterlist,
+			     pagelistinfo->num_pages, pagelistinfo->dma_dir);
+	}
+
+	if (pagelistinfo->pages_need_release)
+		unpin_user_pages(pagelistinfo->pages, pagelistinfo->num_pages);
+
+	if (pagelistinfo->is_from_pool) {
+		dma_pool_free(g_dma_pool, pagelistinfo->pagelist,
+			      pagelistinfo->dma_addr);
+	} else {
+		dma_free_coherent(g_dev, pagelistinfo->pagelist_buffer_size,
+				  pagelistinfo->pagelist,
+				  pagelistinfo->dma_addr);
+	}
+}
+
+/* There is a potential problem with partial cache lines (pages?)
+ * at the ends of the block when reading. If the CPU accessed anything in
+ * the same line (page?) then it may have pulled old data into the cache,
+ * obscuring the new data underneath. We can solve this by transferring the
+ * partial cache lines separately, and allowing the ARM to copy into the
+ * cached area.
+ */
+
+static struct vchiq_pagelist_info *
+create_pagelist(char *buf, char __user *ubuf,
+		size_t count, unsigned short type)
+{
+	struct pagelist *pagelist;
+	struct vchiq_pagelist_info *pagelistinfo;
+	struct page **pages;
+	u32 *addrs;
+	unsigned int num_pages, offset, i, k;
+	int actual_pages;
+	bool is_from_pool;
+	size_t pagelist_size;
+	struct scatterlist *scatterlist, *sg;
+	int dma_buffers;
+	dma_addr_t dma_addr;
+
+	if (count >= INT_MAX - PAGE_SIZE)
+		return NULL;
+
+	if (buf)
+		offset = (uintptr_t)buf & (PAGE_SIZE - 1);
+	else
+		offset = (uintptr_t)ubuf & (PAGE_SIZE - 1);
+	num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE);
+
+	if (num_pages > (SIZE_MAX - sizeof(struct pagelist) -
+			 sizeof(struct vchiq_pagelist_info)) /
+			(sizeof(u32) + sizeof(pages[0]) +
+			 sizeof(struct scatterlist)))
+		return NULL;
+
+	pagelist_size = sizeof(struct pagelist) +
+			(num_pages * sizeof(u32)) +
+			(num_pages * sizeof(pages[0]) +
+			(num_pages * sizeof(struct scatterlist))) +
+			sizeof(struct vchiq_pagelist_info);
+
+	/* Allocate enough storage to hold the page pointers and the page
+	 * list
+	 */
+	if (pagelist_size > VCHIQ_DMA_POOL_SIZE) {
+		pagelist = dma_alloc_coherent(g_dev,
+					      pagelist_size,
+					      &dma_addr,
+					      GFP_KERNEL);
+		is_from_pool = false;
+	} else {
+		pagelist = dma_pool_alloc(g_dma_pool, GFP_KERNEL, &dma_addr);
+		is_from_pool = true;
+	}
+
+	vchiq_log_trace(vchiq_arm_log_level, "%s - %pK", __func__, pagelist);
+
+	if (!pagelist)
+		return NULL;
+
+	addrs = pagelist->addrs;
+	pages = (struct page **)(addrs + num_pages);
+	scatterlist = (struct scatterlist *)(pages + num_pages);
+	pagelistinfo = (struct vchiq_pagelist_info *)
+		       (scatterlist + num_pages);
+
+	pagelist->length = count;
+	pagelist->type = type;
+	pagelist->offset = offset;
+
+	/* Populate the fields of the pagelistinfo structure */
+	pagelistinfo->pagelist = pagelist;
+	pagelistinfo->pagelist_buffer_size = pagelist_size;
+	pagelistinfo->dma_addr = dma_addr;
+	pagelistinfo->is_from_pool = is_from_pool;
+	pagelistinfo->dma_dir = (type == PAGELIST_WRITE) ?
+				 DMA_TO_DEVICE : DMA_FROM_DEVICE;
+	pagelistinfo->num_pages = num_pages;
+	pagelistinfo->pages_need_release = 0;
+	pagelistinfo->pages = pages;
+	pagelistinfo->scatterlist = scatterlist;
+	pagelistinfo->scatterlist_mapped = 0;
+
+	if (buf) {
+		unsigned long length = count;
+		unsigned int off = offset;
+
+		for (actual_pages = 0; actual_pages < num_pages;
+		     actual_pages++) {
+			struct page *pg =
+				vmalloc_to_page((buf +
+						 (actual_pages * PAGE_SIZE)));
+			size_t bytes = PAGE_SIZE - off;
+
+			if (!pg) {
+				cleanup_pagelistinfo(pagelistinfo);
+				return NULL;
+			}
+
+			if (bytes > length)
+				bytes = length;
+			pages[actual_pages] = pg;
+			length -= bytes;
+			off = 0;
+		}
+		/* do not try and release vmalloc pages */
+	} else {
+		actual_pages = pin_user_pages_fast(
+					(unsigned long)ubuf & PAGE_MASK,
+					num_pages,
+					type == PAGELIST_READ,
+					pages);
+
+		if (actual_pages != num_pages) {
+			vchiq_log_info(vchiq_arm_log_level,
+				       "%s - only %d/%d pages locked",
+				       __func__, actual_pages, num_pages);
+
+			/* This is probably due to the process being killed */
+			if (actual_pages > 0)
+				unpin_user_pages(pages, actual_pages);
+			cleanup_pagelistinfo(pagelistinfo);
+			return NULL;
+		}
+		/* release user pages */
+		pagelistinfo->pages_need_release = 1;
+	}
+
+	/*
+	 * Initialize the scatterlist so that the magic cookie
+	 * is filled if debugging is enabled
+	 */
+	sg_init_table(scatterlist, num_pages);
+	/* Now set the pages for each scatterlist */
+	for (i = 0; i < num_pages; i++) {
+		unsigned int len = PAGE_SIZE - offset;
+
+		if (len > count)
+			len = count;
+		sg_set_page(scatterlist + i, pages[i], len, offset);
+		offset = 0;
+		count -= len;
+	}
+
+	dma_buffers = dma_map_sg(g_dma_dev,
+				 scatterlist,
+				 num_pages,
+				 pagelistinfo->dma_dir);
+
+	if (dma_buffers == 0) {
+		cleanup_pagelistinfo(pagelistinfo);
+		return NULL;
+	}
+
+	pagelistinfo->scatterlist_mapped = 1;
+
+	/* Combine adjacent blocks for performance */
+	k = 0;
+	if (g_use_36bit_addrs) {
+		for_each_sg(scatterlist, sg, dma_buffers, i) {
+			u32 len = sg_dma_len(sg);
+			u64 addr = sg_dma_address(sg);
+			u32 page_id = (u32)((addr >> 4) & ~0xff);
+			u32 sg_pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+			/* Note: addrs is the address + page_count - 1
+			 * The firmware expects blocks after the first to be page-
+			 * aligned and a multiple of the page size
+			 */
+			WARN_ON(len == 0);
+			WARN_ON(i &&
+				(i != (dma_buffers - 1)) && (len & ~PAGE_MASK));
+			WARN_ON(i && (addr & ~PAGE_MASK));
+			WARN_ON(upper_32_bits(addr) > 0xf);
+			if (k > 0 &&
+			    ((addrs[k - 1] & ~0xff) +
+			     (((addrs[k - 1] & 0xff) + 1) << 8)
+			     == page_id)) {
+				u32 inc_pages = min(sg_pages,
+						    0xff - (addrs[k - 1] & 0xff));
+				addrs[k - 1] += inc_pages;
+				page_id += inc_pages << 8;
+				sg_pages -= inc_pages;
+			}
+			while (sg_pages) {
+				u32 inc_pages = min(sg_pages, 0x100u);
+				addrs[k++] = page_id | (inc_pages - 1);
+				page_id += inc_pages << 8;
+				sg_pages -= inc_pages;
+			}
+		}
+	} else {
+		for_each_sg(scatterlist, sg, dma_buffers, i) {
+			u32 len = sg_dma_len(sg);
+			u32 addr = sg_dma_address(sg);
+			u32 new_pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+			/* Note: addrs is the address + page_count - 1
+			 * The firmware expects blocks after the first to be page-
+			 * aligned and a multiple of the page size
+			 */
+			WARN_ON(len == 0);
+			WARN_ON(i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK));
+			WARN_ON(i && (addr & ~PAGE_MASK));
+			if (k > 0 &&
+			    ((addrs[k - 1] & PAGE_MASK) +
+			     (((addrs[k - 1] & ~PAGE_MASK) + 1) << PAGE_SHIFT))
+			    == (addr & PAGE_MASK))
+				addrs[k - 1] += new_pages;
+			else
+				addrs[k++] = (addr & PAGE_MASK) | (new_pages - 1);
+		}
+	}
+
+	/* Partial cache lines (fragments) require special measures */
+	if ((type == PAGELIST_READ) &&
+	    ((pagelist->offset & (g_cache_line_size - 1)) ||
+	     ((pagelist->offset + pagelist->length) &
+	      (g_cache_line_size - 1)))) {
+		char *fragments;
+
+		if (down_interruptible(&g_free_fragments_sema)) {
+			cleanup_pagelistinfo(pagelistinfo);
+			return NULL;
+		}
+
+		WARN_ON(!g_free_fragments);
+
+		down(&g_free_fragments_mutex);
+		fragments = g_free_fragments;
+		WARN_ON(!fragments);
+		g_free_fragments = *(char **)g_free_fragments;
+		up(&g_free_fragments_mutex);
+		pagelist->type = PAGELIST_READ_WITH_FRAGMENTS +
+			(fragments - g_fragments_base) / g_fragments_size;
+	}
+
+	return pagelistinfo;
+}
+
+static void
+free_pagelist(struct vchiq_pagelist_info *pagelistinfo,
+	      int actual)
+{
+	struct pagelist *pagelist = pagelistinfo->pagelist;
+	struct page **pages = pagelistinfo->pages;
+	unsigned int num_pages = pagelistinfo->num_pages;
+
+	vchiq_log_trace(vchiq_arm_log_level, "%s - %pK, %d",
+			__func__, pagelistinfo->pagelist, actual);
+
+	/*
+	 * NOTE: dma_unmap_sg must be called before the
+	 * cpu can touch any of the data/pages.
+	 */
+	dma_unmap_sg(g_dma_dev, pagelistinfo->scatterlist,
+		     pagelistinfo->num_pages, pagelistinfo->dma_dir);
+	pagelistinfo->scatterlist_mapped = 0;
+
+	/* Deal with any partial cache lines (fragments) */
+	if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) {
+		char *fragments = g_fragments_base +
+			(pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
+			g_fragments_size;
+		int head_bytes, tail_bytes;
+
+		head_bytes = (g_cache_line_size - pagelist->offset) &
+			     (g_cache_line_size - 1);
+		tail_bytes = (pagelist->offset + actual) &
+			     (g_cache_line_size - 1);
+
+		if ((actual >= 0) && (head_bytes != 0)) {
+			if (head_bytes > actual)
+				head_bytes = actual;
+
+			memcpy((char *)kmap(pages[0]) +
+			       pagelist->offset,
+			       fragments,
+			       head_bytes);
+			kunmap(pages[0]);
+		}
+		if ((actual >= 0) && (head_bytes < actual) &&
+		    (tail_bytes != 0)) {
+			memcpy((char *)kmap(pages[num_pages - 1]) +
+			       ((pagelist->offset + actual) &
+				(PAGE_SIZE - 1) & ~(g_cache_line_size - 1)),
+			       fragments + g_cache_line_size,
+			       tail_bytes);
+			kunmap(pages[num_pages - 1]);
+		}
+
+		down(&g_free_fragments_mutex);
+		*(char **)fragments = g_free_fragments;
+		g_free_fragments = fragments;
+		up(&g_free_fragments_mutex);
+		up(&g_free_fragments_sema);
+	}
+
+	/* Need to mark all the pages dirty. */
+	if (pagelist->type != PAGELIST_WRITE &&
+	    pagelistinfo->pages_need_release) {
+		unsigned int i;
+
+		for (i = 0; i < num_pages; i++)
+			set_page_dirty(pages[i]);
+	}
+
+	cleanup_pagelistinfo(pagelistinfo);
+}
+
 #define VCHIQ_INIT_RETRIES 10
 enum vchiq_status vchiq_initialise(struct vchiq_instance **instance_out)
 {