From patchwork Thu May 5 08:16:23 2022
X-Patchwork-Id: 12839149
From: Juergen Gross
To: xen-devel@lists.xenproject.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v3 04/21] xen/netfront: switch netfront to use INVALID_GRANT_REF
Date: Thu, 5 May 2022 10:16:23 +0200
Message-Id: <20220505081640.17425-5-jgross@suse.com>
In-Reply-To: <20220505081640.17425-1-jgross@suse.com>
References: <20220505081640.17425-1-jgross@suse.com>
X-Mailing-List: netdev@vger.kernel.org

Instead of using a private macro for an invalid grant reference, use the
common one.
Signed-off-by: Juergen Gross
---
 drivers/net/xen-netfront.c | 36 +++++++++++++++++-------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e2b4a1893a13..af3d3de7d9fa 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -78,8 +78,6 @@ struct netfront_cb {

 #define RX_COPY_THRESHOLD 256

-#define GRANT_INVALID_REF 0
-
 #define NET_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)

@@ -224,7 +222,7 @@ static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 {
 	int i = xennet_rxidx(ri);
 	grant_ref_t ref = queue->grant_rx_ref[i];
-	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	queue->grant_rx_ref[i] = INVALID_GRANT_REF;
 	return ref;
 }

@@ -432,7 +430,7 @@ static bool xennet_tx_buf_gc(struct netfront_queue *queue)
 		}
 		gnttab_release_grant_reference(
 			&queue->gref_tx_head, queue->grant_tx_ref[id]);
-		queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_tx_ref[id] = INVALID_GRANT_REF;
 		queue->grant_tx_page[id] = NULL;
 		add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, id);
 		dev_kfree_skb_irq(skb);
@@ -1021,7 +1019,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 		 * the backend driver. In future this should flag the bad
 		 * situation to the system controller to reboot the backend.
 		 */
-		if (ref == GRANT_INVALID_REF) {
+		if (ref == INVALID_GRANT_REF) {
 			if (net_ratelimit())
 				dev_warn(dev, "Bad rx response id %d.\n", rx->id);
@@ -1390,7 +1388,7 @@ static void xennet_release_tx_bufs(struct netfront_queue *queue)
 		gnttab_end_foreign_access(queue->grant_tx_ref[i],
			(unsigned long)page_address(queue->grant_tx_page[i]));
 		queue->grant_tx_page[i] = NULL;
-		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_ref[i] = INVALID_GRANT_REF;
 		add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, i);
 		dev_kfree_skb_irq(skb);
 	}
@@ -1411,7 +1409,7 @@ static void xennet_release_rx_bufs(struct netfront_queue *queue)
 			continue;

 		ref = queue->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF)
+		if (ref == INVALID_GRANT_REF)
 			continue;

 		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
@@ -1422,7 +1420,7 @@ static void xennet_release_rx_bufs(struct netfront_queue *queue)
 		get_page(page);
 		gnttab_end_foreign_access(ref,
					  (unsigned long)page_address(page));
-		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = INVALID_GRANT_REF;

 		kfree_skb(skb);
 	}
@@ -1761,7 +1759,7 @@ static int netfront_probe(struct xenbus_device *dev,
 static void xennet_end_access(int ref, void *page)
 {
 	/* This frees the page as a side-effect */
-	if (ref != GRANT_INVALID_REF)
+	if (ref != INVALID_GRANT_REF)
 		gnttab_end_foreign_access(ref, (unsigned long)page);
 }

@@ -1798,8 +1796,8 @@ static void xennet_disconnect_backend(struct netfront_info *info)
 		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
 		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
-		queue->tx_ring_ref = GRANT_INVALID_REF;
-		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx_ring_ref = INVALID_GRANT_REF;
+		queue->rx_ring_ref = INVALID_GRANT_REF;
 		queue->tx.sring = NULL;
 		queue->rx.sring = NULL;

@@ -1927,8 +1925,8 @@ static int setup_netfront(struct xenbus_device *dev,
 	grant_ref_t gref;
 	int err;

-	queue->tx_ring_ref = GRANT_INVALID_REF;
-	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->tx_ring_ref = INVALID_GRANT_REF;
+	queue->rx_ring_ref = INVALID_GRANT_REF;
 	queue->rx.sring = NULL;
 	queue->tx.sring = NULL;

@@ -1978,17 +1976,17 @@ static int setup_netfront(struct xenbus_device *dev,
	 * granted pages because backend is not accessing it at this point.
	 */
 fail:
-	if (queue->rx_ring_ref != GRANT_INVALID_REF) {
+	if (queue->rx_ring_ref != INVALID_GRANT_REF) {
 		gnttab_end_foreign_access(queue->rx_ring_ref,
					  (unsigned long)rxs);
-		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = INVALID_GRANT_REF;
 	} else {
 		free_page((unsigned long)rxs);
 	}
-	if (queue->tx_ring_ref != GRANT_INVALID_REF) {
+	if (queue->tx_ring_ref != INVALID_GRANT_REF) {
 		gnttab_end_foreign_access(queue->tx_ring_ref,
					  (unsigned long)txs);
-		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->tx_ring_ref = INVALID_GRANT_REF;
 	} else {
 		free_page((unsigned long)txs);
 	}
@@ -2020,7 +2018,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	queue->tx_pend_queue = TX_LINK_NONE;
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		queue->tx_link[i] = i + 1;
-		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_ref[i] = INVALID_GRANT_REF;
 		queue->grant_tx_page[i] = NULL;
 	}
 	queue->tx_link[NET_TX_RING_SIZE - 1] = TX_LINK_NONE;
@@ -2028,7 +2026,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	/* Clear out rx_skbs */
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		queue->rx_skbs[i] = NULL;
-		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[i] = INVALID_GRANT_REF;
 	}

 	/* A grant for every tx ring slot */
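The idiom the patch converges on, shown outside the diff as a minimal
hypothetical sketch (the demo_slot structure and helpers are illustrative
only, not part of the driver): a slot's grant reference is parked at the
common INVALID_GRANT_REF, and foreign access is revoked only when the slot
actually holds a grant.

#include <xen/grant_table.h>

struct demo_slot {			/* hypothetical example, not driver state */
	grant_ref_t ref;
	void *page;
};

static void demo_slot_init(struct demo_slot *slot)
{
	slot->ref = INVALID_GRANT_REF;	/* slot holds no grant yet */
	slot->page = NULL;
}

static void demo_slot_release(struct demo_slot *slot)
{
	/* Revoke access only if a grant was actually installed. */
	if (slot->ref != INVALID_GRANT_REF) {
		/* Frees the page as a side effect, as in xennet_end_access(). */
		gnttab_end_foreign_access(slot->ref, (unsigned long)slot->page);
		slot->ref = INVALID_GRANT_REF;
	}
	slot->page = NULL;
}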
From patchwork Thu May 5 08:16:30 2022
X-Patchwork-Id: 12839148
From: Juergen Gross
To: xen-devel@lists.xenproject.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v3 11/21] xen: update ring.h
Date: Thu, 5 May 2022 10:16:30 +0200
Message-Id: <20220505081640.17425-12-jgross@suse.com>
In-Reply-To: <20220505081640.17425-1-jgross@suse.com>
References: <20220505081640.17425-1-jgross@suse.com>
X-Mailing-List: netdev@vger.kernel.org

Update include/xen/interface/io/ring.h to its newest version.

Switch the two improper use cases of RING_HAS_UNCONSUMED_RESPONSES() to
XEN_RING_NR_UNCONSUMED_RESPONSES() in order to avoid the nasty
XEN_RING_HAS_UNCONSUMED_IS_BOOL #define.

Signed-off-by: Juergen Gross
---
V2:
- new patch
---
 drivers/net/xen-netfront.c      |  4 ++--
 include/xen/interface/io/ring.h | 19 ++++++++++++++-----
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index af3d3de7d9fa..966bee2a6902 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -866,7 +866,7 @@ static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)

 	spin_lock_irqsave(&queue->rx_cons_lock, flags);
 	queue->rx.rsp_cons = val;
-	queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
+	queue->rx_rsp_unconsumed = XEN_RING_NR_UNCONSUMED_RESPONSES(&queue->rx);
 	spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
 }

@@ -1498,7 +1498,7 @@ static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
 		return false;

 	spin_lock_irqsave(&queue->rx_cons_lock, flags);
-	work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
+	work_queued = XEN_RING_NR_UNCONSUMED_RESPONSES(&queue->rx);
 	if (work_queued > queue->rx_rsp_unconsumed) {
 		queue->rx_rsp_unconsumed = work_queued;
 		*eoi = 0;
diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 2470ec45ebb2..ba4c4274b714 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -72,9 +72,8 @@ typedef unsigned int RING_IDX;
 * of the shared memory area (PAGE_SIZE, for instance). To initialise
 * the front half:
 *
- *     mytag_front_ring_t front_ring;
- *     SHARED_RING_INIT((mytag_sring_t *)shared_page);
- *     FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
+ *     mytag_front_ring_t ring;
+ *     XEN_FRONT_RING_INIT(&ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
 *
 * Initializing the back follows similarly (note that only the front
 * initializes the shared ring):
@@ -146,6 +145,11 @@ struct __name##_back_ring {				\

 #define FRONT_RING_INIT(_r, _s, __size) FRONT_RING_ATTACH(_r, _s, 0, __size)

+#define XEN_FRONT_RING_INIT(r, s, size) do {				\
+	SHARED_RING_INIT(s);						\
+	FRONT_RING_INIT(r, s, size);					\
+} while (0)
+
 #define BACK_RING_ATTACH(_r, _s, _i, __size) do {			\
 	(_r)->rsp_prod_pvt = (_i);					\
 	(_r)->req_cons = (_i);						\
@@ -170,16 +174,21 @@ struct __name##_back_ring {				\
 	(RING_FREE_REQUESTS(_r) == 0)

 /* Test if there are outstanding messages to be processed on a ring.
  */
-#define RING_HAS_UNCONSUMED_RESPONSES(_r)				\
+#define XEN_RING_NR_UNCONSUMED_RESPONSES(_r)				\
 	((_r)->sring->rsp_prod - (_r)->rsp_cons)

-#define RING_HAS_UNCONSUMED_REQUESTS(_r) ({				\
+#define XEN_RING_NR_UNCONSUMED_REQUESTS(_r) ({				\
 	unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;	\
 	unsigned int rsp = RING_SIZE(_r) -				\
 		((_r)->req_cons - (_r)->rsp_prod_pvt);			\
 	req < rsp ? req : rsp;						\
 })

+#define RING_HAS_UNCONSUMED_RESPONSES(_r)				\
+	(!!XEN_RING_NR_UNCONSUMED_RESPONSES(_r))
+#define RING_HAS_UNCONSUMED_REQUESTS(_r)				\
+	(!!XEN_RING_NR_UNCONSUMED_REQUESTS(_r))
+
 /* Direct access to individual ring elements, by index. */
 #define RING_GET_REQUEST(_r, _idx)					\
 	(&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
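A minimal hypothetical sketch of the distinction the commit message relies on
(demo_check() is illustrative only): after this update,
XEN_RING_NR_UNCONSUMED_RESPONSES() yields the number of unconsumed responses,
while RING_HAS_UNCONSUMED_RESPONSES() is strictly boolean, so code that stores
or compares the count, as netfront does, must use the former.

#include <linux/printk.h>
#include <xen/interface/io/netif.h>
#include <xen/interface/io/ring.h>

static void demo_check(struct xen_netif_rx_front_ring *ring)
{
	unsigned int queued;

	/* Count of responses the backend produced that we have not consumed. */
	queued = XEN_RING_NR_UNCONSUMED_RESPONSES(ring);

	/* Boolean form: only answers "is there anything to do?". */
	if (RING_HAS_UNCONSUMED_RESPONSES(ring))
		pr_debug("ring has %u unconsumed responses\n", queued);
}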
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Subject: [PATCH v3 14/21] xen/netfront: use xenbus_setup_ring() and xenbus_teardown_ring() Date: Thu, 5 May 2022 10:16:33 +0200 Message-Id: <20220505081640.17425-15-jgross@suse.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220505081640.17425-1-jgross@suse.com> References: <20220505081640.17425-1-jgross@suse.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Simplify netfront's ring creation and removal via xenbus_setup_ring() and xenbus_teardown_ring(). Signed-off-by: Juergen Gross --- drivers/net/xen-netfront.c | 53 +++++++++----------------------------- 1 file changed, 12 insertions(+), 41 deletions(-) diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 966bee2a6902..65ab907aca5a 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -1921,8 +1921,7 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_queue *queue, unsigned int feature_split_evtchn) { struct xen_netif_tx_sring *txs; - struct xen_netif_rx_sring *rxs = NULL; - grant_ref_t gref; + struct xen_netif_rx_sring *rxs; int err; queue->tx_ring_ref = INVALID_GRANT_REF; @@ -1930,33 +1929,19 @@ static int setup_netfront(struct xenbus_device *dev, queue->rx.sring = NULL; queue->tx.sring = NULL; - txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH); - if (!txs) { - err = -ENOMEM; - xenbus_dev_fatal(dev, err, "allocating tx ring page"); + err = xenbus_setup_ring(dev, GFP_NOIO | __GFP_HIGH, (void **)&txs, + 1, &queue->tx_ring_ref); + if (err) goto fail; - } - SHARED_RING_INIT(txs); - FRONT_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE); - err = xenbus_grant_ring(dev, txs, 1, &gref); - if (err < 0) - goto fail; - queue->tx_ring_ref = gref; + XEN_FRONT_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE); - rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH); - if (!rxs) { - err = -ENOMEM; - xenbus_dev_fatal(dev, err, "allocating rx ring page"); + err = xenbus_setup_ring(dev, GFP_NOIO | __GFP_HIGH, (void **)&rxs, + 1, &queue->rx_ring_ref); + if (err) goto fail; - } - SHARED_RING_INIT(rxs); - FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE); - err = xenbus_grant_ring(dev, rxs, 1, &gref); - if (err < 0) - goto fail; - queue->rx_ring_ref = gref; + XEN_FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE); if (feature_split_evtchn) err = setup_netfront_split(queue); @@ -1972,24 +1957,10 @@ static int setup_netfront(struct xenbus_device *dev, return 0; - /* If we fail to setup netfront, it is safe to just revoke access to - * granted pages because backend is not accessing it at this point. - */ fail: - if (queue->rx_ring_ref != INVALID_GRANT_REF) { - gnttab_end_foreign_access(queue->rx_ring_ref, - (unsigned long)rxs); - queue->rx_ring_ref = INVALID_GRANT_REF; - } else { - free_page((unsigned long)rxs); - } - if (queue->tx_ring_ref != INVALID_GRANT_REF) { - gnttab_end_foreign_access(queue->tx_ring_ref, - (unsigned long)txs); - queue->tx_ring_ref = INVALID_GRANT_REF; - } else { - free_page((unsigned long)txs); - } + xenbus_teardown_ring((void **)&queue->rx.sring, 1, &queue->rx_ring_ref); + xenbus_teardown_ring((void **)&queue->tx.sring, 1, &queue->tx_ring_ref); + return err; }