From patchwork Fri Apr 26 18:32:40 2019
X-Patchwork-Submitter: Halil Pasic
X-Patchwork-Id: 10919571
From: Halil Pasic
To: kvm@vger.kernel.org, linux-s390@vger.kernel.org, Cornelia Huck,
    Martin Schwidefsky, Sebastian Ott
Cc: Halil Pasic, virtualization@lists.linux-foundation.org,
    "Michael S. Tsirkin", Christoph Hellwig, Thomas Huth,
    Christian Borntraeger, Viktor Mihajlovski, Vasily Gorbik,
    Janosch Frank, Claudio Imbrenda, Farhan Ali, Eric Farman
Subject: [PATCH 05/10] s390/cio: introduce DMA pools to cio
Date: Fri, 26 Apr 2019 20:32:40 +0200
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190426183245.37939-1-pasic@linux.ibm.com>
References: <20190426183245.37939-1-pasic@linux.ibm.com>
Message-Id: <20190426183245.37939-6-pasic@linux.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

To support protected virtualization, cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.

Let us introduce one global pool for cio, and some tools for pools
seated at individual devices.

Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.

Signed-off-by: Halil Pasic
---
 arch/s390/Kconfig           |   1 +
 arch/s390/include/asm/cio.h |  11 +++++
 drivers/s390/cio/cio.h      |   1 +
 drivers/s390/cio/css.c      | 101 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 114 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 5500d05d4d53..5861311d95d9 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -195,6 +195,7 @@ config S390
 	select VIRT_TO_BUS
 	select HAVE_NMI
 	select SWIOTLB
+	select GENERIC_ALLOCATOR
 
 
 config SCHED_OMIT_FRAME_POINTER

diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
index 1727180e8ca1..43c007d2775a 100644
--- a/arch/s390/include/asm/cio.h
+++ b/arch/s390/include/asm/cio.h
@@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
 void channel_subsystem_reinit(void);
 extern void css_schedule_reprobe(void);
 
+extern void *cio_dma_zalloc(size_t size);
+extern void cio_dma_free(void *cpu_addr, size_t size);
+extern struct device *cio_get_dma_css_dev(void);
+
+struct gen_pool;
+void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
+			size_t size);
+void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
+
 /* Function from drivers/s390/cio/chsc.c */
 int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
 int chsc_sstpi(void *page, void *result, size_t size);

diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
index 92eabbb5f18d..f23f7e2c33f7 100644
--- a/drivers/s390/cio/cio.h
+++ b/drivers/s390/cio/cio.h
@@ -113,6 +113,7 @@ struct subchannel {
 	enum sch_todo todo;
 	struct work_struct todo_work;
 	struct schib_config config;
+	u64 dma_mask;
 } __attribute__ ((aligned(8)));
 
 DECLARE_PER_CPU_ALIGNED(struct irb, cio_irb);
diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
index aea502922646..7087cc314fe9 100644
--- a/drivers/s390/cio/css.c
+++ b/drivers/s390/cio/css.c
@@ -20,6 +20,8 @@
 #include <linux/reboot.h>
 #include <linux/suspend.h>
 #include <linux/proc_fs.h>
+#include <linux/genalloc.h>
+#include <linux/dma-mapping.h>
 #include <asm/isc.h>
 #include <asm/crw.h>
 
@@ -199,6 +201,8 @@ static int css_validate_subchannel(struct subchannel_id schid,
 	return err;
 }
 
+static u64 css_dev_dma_mask = DMA_BIT_MASK(31);
+
 struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
 					struct schib *schib)
 {
@@ -224,6 +228,9 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
 	INIT_WORK(&sch->todo_work, css_sch_todo);
 	sch->dev.release = &css_subchannel_release;
 	device_initialize(&sch->dev);
+	sch->dma_mask = css_dev_dma_mask;
+	sch->dev.dma_mask = &sch->dma_mask;
+	sch->dev.coherent_dma_mask = sch->dma_mask;
 	return sch;
 
 err:
@@ -899,6 +906,9 @@ static int __init setup_css(int nr)
 	dev_set_name(&css->device, "css%x", nr);
 	css->device.groups = cssdev_attr_groups;
 	css->device.release = channel_subsystem_release;
+	/* some cio DMA memory needs to be 31 bit addressable */
+	css->device.coherent_dma_mask = css_dev_dma_mask;
+	css->device.dma_mask = &css_dev_dma_mask;
 
 	mutex_init(&css->mutex);
 	css->cssid = chsc_get_cssid(nr);
@@ -1018,6 +1028,96 @@ static struct notifier_block css_power_notifier = {
 	.notifier_call = css_power_event,
 };
 
+#define POOL_INIT_PAGES 1
+static struct gen_pool *cio_dma_pool;
+/* Currently cio supports only a single css */
+#define CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
+
+
+struct device *cio_get_dma_css_dev(void)
+{
+	return &channel_subsystems[0]->device;
+}
+
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
+{
+	struct gen_pool *gp_dma;
+	void *cpu_addr;
+	dma_addr_t dma_addr;
+	int i;
+
+	gp_dma = gen_pool_create(3, -1);
+	if (!gp_dma)
+		return NULL;
+	for (i = 0; i < nr_pages; ++i) {
+		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
+					      CIO_DMA_GFP);
+		if (!cpu_addr)
+			return gp_dma;
+		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+				  dma_addr, PAGE_SIZE, -1);
+	}
+	return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	dma_free_coherent((struct device *) data, PAGE_SIZE,
+			  (void *) chunk->start_addr,
+			  (dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+	if (!gp_dma)
+		return;
+	/* this is quite ugly but no better idea */
+	gen_pool_for_each_chunk(gp_dma, __gp_dma_free_dma, dma_dev);
+	gen_pool_destroy(gp_dma);
+}
+
+static void __init cio_dma_pool_init(void)
+{
+	/* No need to free up the resources: compiled in */
+	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
+}
+
+void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
+			size_t size)
+{
+	dma_addr_t dma_addr;
+	unsigned long addr = gen_pool_alloc(gp_dma, size);
+
+	if (!addr) {
+		addr = (unsigned long) dma_alloc_coherent(dma_dev,
+					PAGE_SIZE, &dma_addr, CIO_DMA_GFP);
+		if (!addr)
+			return NULL;
+		gen_pool_add_virt(gp_dma, addr, dma_addr, PAGE_SIZE, -1);
+		addr = gen_pool_alloc(gp_dma, size);
+	}
+	return (void *) addr;
+}
+
+void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
+{
+	if (!cpu_addr)
+		return;
+	memset(cpu_addr, 0, size);
+	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
+}
+
+void *cio_dma_zalloc(size_t size)
+{
+	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);
+}
+
+void cio_dma_free(void *cpu_addr, size_t size)
+{
+	cio_gp_dma_free(cio_dma_pool, cpu_addr, size);
+}
+
 /*
  * Now that the driver core is running, we can setup our channel subsystem.
  * The struct subchannel's are created during probing.
@@ -1063,6 +1163,7 @@ static int __init css_bus_init(void)
 		unregister_reboot_notifier(&css_reboot_notifier);
 		goto out_unregister;
 	}
+	cio_dma_pool_init();
 	css_init_done = 1;
 
 	/* Enable default isc for I/O subchannels. */
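
For readers unfamiliar with the interface: below is a minimal usage sketch
of the cio_dma_zalloc()/cio_dma_free() pair added above. It is not part of
the patch; the consumer shown (my_ctrl_block, my_driver_init(),
my_driver_exit(), the 224-byte size) is made up purely to illustrate the
intended calling convention. The point of the gen_pool backing is that
such sub-PAGE_SIZE requests share the pool's DMA pages instead of each
burning a full page.

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/cio.h>	/* cio_dma_zalloc()/cio_dma_free(), as declared above */

/* hypothetical control block that must be visible to the hypervisor */
struct my_ctrl_block {
	u8 data[224];
};

static struct my_ctrl_block *my_cb;

static int my_driver_init(void)
{
	/* zeroed, 31-bit addressable, carved out of a shared DMA page */
	my_cb = cio_dma_zalloc(sizeof(*my_cb));
	if (!my_cb)
		return -ENOMEM;
	return 0;
}

static void my_driver_exit(void)
{
	/* wipes the chunk and returns it to the global cio gen_pool */
	cio_dma_free(my_cb, sizeof(*my_cb));
}

A device that wants its own backing pages rather than the global pool
would instead create one with cio_gp_dma_create(cio_get_dma_css_dev(), 1)
and use the cio_gp_dma_zalloc()/cio_gp_dma_free() variants against it.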