From patchwork Wed Aug 16 10:49:11 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Beulich <JBeulich@suse.com>
X-Patchwork-Id: 9903433
Message-Id: <59943F470200007800170370@prv-mh.provo.novell.com>
In-Reply-To: <59943AC70200007800170343@prv-mh.provo.novell.com>
References: <59943AC70200007800170343@prv-mh.provo.novell.com>
Date: Wed, 16 Aug 2017 04:49:11 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
 Ian Jackson, Tim Deegan
Subject: [Xen-devel] [PATCH v2 3/7] gnttab: drop pointless leading double
 underscores
Content-Disposition: inline

They violate name space rules, and we don't really need them. When
followed by "gnttab_", also drop that.

Signed-off-by: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Andrew Cooper
---
v2: Re-base. Minor formatting adjustment.
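
For reference, the "name space rules" in question are C's reserved-identifier
rules: all identifiers beginning with a double underscore, or with an
underscore followed by an uppercase letter, are reserved for the
implementation (C11 7.1.3). A minimal sketch of the distinction, using
hypothetical names not taken from this patch:

    /* Reserved: the leading "__" places this name in the implementation's
       name space, risking collision with compiler/libc symbols. */
    static int __frame_lookup(unsigned long gfn);

    /* Fine: a plain identifier stays in the program's own name space. */
    static int frame_lookup(unsigned long gfn);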

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -252,8 +252,9 @@ static inline void active_entry_release(
    If rc == GNTST_okay, *page contains the page struct with a ref taken.
    Caller must do put_page(*page).
    If any error, *page = NULL, *frame = INVALID_MFN, no ref taken. */
-static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct page_info **page,
-                             int readonly, struct domain *rd)
+static int get_paged_frame(unsigned long gfn, unsigned long *frame,
+                           struct page_info **page, bool readonly,
+                           struct domain *rd)
 {
     int rc = GNTST_okay;
 #if defined(P2M_PAGED_TYPES) || defined(P2M_SHARED_TYPES)
@@ -319,9 +320,7 @@ double_gt_unlock(struct grant_table *lgt
 #define INVALID_MAPTRACK_HANDLE UINT_MAX
 
 static inline grant_handle_t
-__get_maptrack_handle(
-    struct grant_table *t,
-    struct vcpu *v)
+_get_maptrack_handle(struct grant_table *t, struct vcpu *v)
 {
     unsigned int head, next, prev_head;
 
@@ -380,7 +379,7 @@ static grant_handle_t steal_maptrack_han
 {
     grant_handle_t handle;
 
-    handle = __get_maptrack_handle(t, currd->vcpu[i]);
+    handle = _get_maptrack_handle(t, currd->vcpu[i]);
     if ( handle != INVALID_MAPTRACK_HANDLE )
     {
         maptrack_entry(t, handle).vcpu = curr->vcpu_id;
@@ -434,7 +433,7 @@ get_maptrack_handle(
     grant_handle_t handle;
     struct grant_mapping *new_mt = NULL;
 
-    handle = __get_maptrack_handle(lgt, curr);
+    handle = _get_maptrack_handle(lgt, curr);
     if ( likely(handle != INVALID_MAPTRACK_HANDLE) )
         return handle;
 
@@ -789,7 +788,7 @@ static unsigned int mapkind(
  * update, as indicated by the GNTMAP_contains_pte flag.
  */
 static void
-__gnttab_map_grant_ref(
+map_grant_ref(
     struct gnttab_map_grant_ref *op)
 {
     struct domain *ld, *rd, *owner = NULL;
@@ -888,8 +887,8 @@ __gnttab_map_grant_ref(
             shared_entry_v1(rgt, op->ref).frame :
             shared_entry_v2(rgt, op->ref).full_page.frame;
 
-        rc = __get_paged_frame(gfn, &frame, &pg,
-                               !!(op->flags & GNTMAP_readonly), rd);
+        rc = get_paged_frame(gfn, &frame, &pg,
+                             op->flags & GNTMAP_readonly, rd);
         if ( rc != GNTST_okay )
             goto unlock_out_clear;
         act->gfn = gfn;
@@ -919,7 +918,7 @@ __gnttab_map_grant_ref(
     active_entry_release(act);
     grant_read_unlock(rgt);
 
-    /* pg may be set, with a refcount included, from __get_paged_frame */
+    /* pg may be set, with a refcount included, from get_paged_frame(). */
     if ( !pg )
     {
         pg = mfn_valid(_mfn(frame)) ? mfn_to_page(frame) : NULL;
@@ -1130,7 +1129,7 @@ gnttab_map_grant_ref(
         if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
             return -EFAULT;
 
-        __gnttab_map_grant_ref(&op);
+        map_grant_ref(&op);
 
         if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
             return -EFAULT;
@@ -1140,7 +1139,7 @@ gnttab_map_grant_ref(
 }
 
 static void
-__gnttab_unmap_common(
+unmap_common(
     struct gnttab_unmap_common *op)
 {
     domid_t dom;
@@ -1200,8 +1199,8 @@ __gnttab_unmap_common(
         /*
          * This ought to be impossible, as such a mapping should not have
          * been established (see the nr_grant_entries(rgt) bounds check in
-         * __gnttab_map_grant_ref()). Doing this check only in
-         * __gnttab_unmap_common_complete() - as it used to be done - would,
+         * gnttab_map_grant_ref()). Doing this check only in
+         * gnttab_unmap_common_complete() - as it used to be done - would,
          * however, be too late.
          */
         rc = GNTST_bad_gntref;
@@ -1315,7 +1314,7 @@ __gnttab_unmap_common(
 }
 
 static void
-__gnttab_unmap_common_complete(struct gnttab_unmap_common *op)
+unmap_common_complete(struct gnttab_unmap_common *op)
 {
     struct domain *ld, *rd = op->rd;
     struct grant_table *rgt;
@@ -1326,7 +1325,7 @@ __gnttab_unmap_common_complete(struct gn
 
     if ( !op->done )
     {
-        /* __gntab_unmap_common() didn't do anything - nothing to complete. */
+        /* unmap_common() didn't do anything - nothing to complete. */
         return;
     }
 
@@ -1395,7 +1394,7 @@ __gnttab_unmap_common_complete(struct gn
 }
 
 static void
-__gnttab_unmap_grant_ref(
+unmap_grant_ref(
     struct gnttab_unmap_grant_ref *op,
     struct gnttab_unmap_common *common)
 {
@@ -1409,7 +1408,7 @@ __gnttab_unmap_grant_ref(
     common->rd = NULL;
     common->frame = 0;
 
-    __gnttab_unmap_common(common);
+    unmap_common(common);
     op->status = common->status;
 }
 
@@ -1431,7 +1430,7 @@ gnttab_unmap_grant_ref(
     {
         if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             goto fault;
-        __gnttab_unmap_grant_ref(&op, &(common[i]));
+        unmap_grant_ref(&op, &common[i]);
         ++partial_done;
         if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             goto fault;
@@ -1441,7 +1440,7 @@ gnttab_unmap_grant_ref(
     gnttab_flush_tlb(current->domain);
 
     for ( i = 0; i < partial_done; i++ )
-        __gnttab_unmap_common_complete(&(common[i]));
+        unmap_common_complete(&common[i]);
 
     count -= c;
     done += c;
@@ -1456,12 +1455,12 @@ fault:
     gnttab_flush_tlb(current->domain);
 
     for ( i = 0; i < partial_done; i++ )
-        __gnttab_unmap_common_complete(&(common[i]));
+        unmap_common_complete(&common[i]);
     return -EFAULT;
 }
 
 static void
-__gnttab_unmap_and_replace(
+unmap_and_replace(
     struct gnttab_unmap_and_replace *op,
     struct gnttab_unmap_common *common)
 {
@@ -1475,7 +1474,7 @@ __gnttab_unmap_and_replace(
     common->rd = NULL;
     common->frame = 0;
 
-    __gnttab_unmap_common(common);
+    unmap_common(common);
     op->status = common->status;
 }
 
@@ -1496,7 +1495,7 @@ gnttab_unmap_and_replace(
     {
         if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             goto fault;
-        __gnttab_unmap_and_replace(&op, &(common[i]));
+        unmap_and_replace(&op, &common[i]);
         ++partial_done;
         if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             goto fault;
@@ -1506,7 +1505,7 @@ gnttab_unmap_and_replace(
     gnttab_flush_tlb(current->domain);
 
     for ( i = 0; i < partial_done; i++ )
-        __gnttab_unmap_common_complete(&(common[i]));
+        unmap_common_complete(&common[i]);
 
     count -= c;
     done += c;
@@ -1521,7 +1520,7 @@ fault:
    gnttab_flush_tlb(current->domain);
 
    for ( i = 0; i < partial_done; i++ )
-        __gnttab_unmap_common_complete(&(common[i]));
+        unmap_common_complete(&common[i]);
    return -EFAULT;
 }
 
@@ -1872,9 +1871,10 @@ gnttab_transfer(
 
 #ifdef CONFIG_X86
         {
-            p2m_type_t __p2mt;
-            mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
-            if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
+            p2m_type_t p2mt;
+
+            mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &p2mt));
+            if ( p2m_is_shared(p2mt) || !p2m_is_valid(p2mt) )
                 mfn = mfn_x(INVALID_MFN);
         }
 #else
@@ -2061,10 +2061,12 @@ gnttab_transfer(
     return 0;
 }
 
-/* Undo __acquire_grant_for_copy. Again, this has no effect on page
-   type and reference counts. */
+/*
+ * Undo acquire_grant_for_copy(). This has no effect on page type and
+ * reference counts.
+ */
 static void
-__release_grant_for_copy(
+release_grant_for_copy(
     struct domain *rd, grant_ref_t gref, bool readonly)
 {
     struct grant_table *rgt = rd->grant_table;
@@ -2119,7 +2121,7 @@ __release_grant_for_copy(
      * Recursive call, but it is bounded (acquire permits only a single
      * level of transitivity), so it's okay.
      */
-    __release_grant_for_copy(td, trans_gref, readonly);
+    release_grant_for_copy(td, trans_gref, readonly);
 
     rcu_unlock_domain(td);
 }
@@ -2130,8 +2132,8 @@ __release_grant_for_copy(
    under the domain's grant table lock. */
 /* Only safe on transitive grants. Even then, note that we don't
    attempt to drop any pin on the referent grant. */
-static void __fixup_status_for_copy_pin(const struct active_grant_entry *act,
-                                        uint16_t *status)
+static void fixup_status_for_copy_pin(const struct active_grant_entry *act,
+                                      uint16_t *status)
 {
     if ( !(act->pin & (GNTPIN_hstw_mask | GNTPIN_devw_mask)) )
         gnttab_clear_flag(_GTF_writing, status);
@@ -2145,7 +2147,7 @@ static void __fixup_status_for_copy_pin(
    take one ref count on the target page, stored in *page.
    If there is any error, *page = NULL, no ref taken. */
 static int
-__acquire_grant_for_copy(
+acquire_grant_for_copy(
     struct domain *rd, grant_ref_t gref, domid_t ldom, bool readonly,
     unsigned long *frame, struct page_info **page, uint16_t *page_off,
     uint16_t *length, bool allow_transitive)
@@ -2229,24 +2231,24 @@ __acquire_grant_for_copy(
                                 trans_domid);
 
             /*
-             * __acquire_grant_for_copy() could take the lock on the
+             * acquire_grant_for_copy() could take the lock on the
              * remote table (if rd == td), so we have to drop the lock
             * here and reacquire.
             */
            active_entry_release(act);
            grant_read_unlock(rgt);
 
-            rc = __acquire_grant_for_copy(td, trans_gref, rd->domain_id,
-                                          readonly, &grant_frame, page,
-                                          &trans_page_off, &trans_length,
-                                          false);
+            rc = acquire_grant_for_copy(td, trans_gref, rd->domain_id,
+                                        readonly, &grant_frame, page,
+                                        &trans_page_off, &trans_length,
+                                        false);
 
             grant_read_lock(rgt);
             act = active_entry_acquire(rgt, gref);
 
             if ( rc != GNTST_okay )
             {
-                __fixup_status_for_copy_pin(act, status);
+                fixup_status_for_copy_pin(act, status);
                 rcu_unlock_domain(td);
                 active_entry_release(act);
                 grant_read_unlock(rgt);
@@ -2267,8 +2269,8 @@ __acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
             {
-                __release_grant_for_copy(td, trans_gref, readonly);
-                __fixup_status_for_copy_pin(act, status);
+                release_grant_for_copy(td, trans_gref, readonly);
+                fixup_status_for_copy_pin(act, status);
                 rcu_unlock_domain(td);
                 active_entry_release(act);
                 grant_read_unlock(rgt);
@@ -2308,7 +2310,7 @@ __acquire_grant_for_copy(
         {
             unsigned long gfn = shared_entry_v1(rgt, gref).frame;
 
-            rc = __get_paged_frame(gfn, &grant_frame, page, readonly, rd);
+            rc = get_paged_frame(gfn, &grant_frame, page, readonly, rd);
             if ( rc != GNTST_okay )
                 goto unlock_out_clear;
             act->gfn = gfn;
@@ -2318,7 +2320,8 @@ __acquire_grant_for_copy(
         }
         else if ( !(sha2->hdr.flags & GTF_sub_page) )
         {
-            rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, readonly, rd);
+            rc = get_paged_frame(sha2->full_page.frame, &grant_frame, page,
+                                 readonly, rd);
             if ( rc != GNTST_okay )
                 goto unlock_out_clear;
             act->gfn = sha2->full_page.frame;
@@ -2328,7 +2331,8 @@ __acquire_grant_for_copy(
         }
         else
         {
-            rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, readonly, rd);
+            rc = get_paged_frame(sha2->sub_page.frame, &grant_frame, page,
+                                 readonly, rd);
             if ( rc != GNTST_okay )
                 goto unlock_out_clear;
             act->gfn = sha2->sub_page.frame;
@@ -2481,7 +2485,7 @@ static void gnttab_copy_release_buf(stru
     }
     if ( buf->have_grant )
     {
-        __release_grant_for_copy(buf->domain, buf->ptr.u.ref, buf->read_only);
+        release_grant_for_copy(buf->domain, buf->ptr.u.ref, buf->read_only);
         buf->have_grant = 0;
     }
 }
@@ -2497,11 +2501,11 @@ static int gnttab_copy_claim_buf(const s
 
     if ( op->flags & gref_flag )
     {
-        rc = __acquire_grant_for_copy(buf->domain, ptr->u.ref,
-                                      current->domain->domain_id,
-                                      buf->read_only,
-                                      &buf->frame, &buf->page,
-                                      &buf->ptr.offset, &buf->len, true);
+        rc = acquire_grant_for_copy(buf->domain, ptr->u.ref,
+                                    current->domain->domain_id,
+                                    buf->read_only,
+                                    &buf->frame, &buf->page,
+                                    &buf->ptr.offset, &buf->len, true);
         if ( rc != GNTST_okay )
             goto out;
         buf->ptr.u.ref = ptr->u.ref;
@@ -2509,8 +2513,8 @@ static int gnttab_copy_claim_buf(const s
     }
     else
     {
-        rc = __get_paged_frame(ptr->u.gmfn, &buf->frame, &buf->page,
-                               buf->read_only, buf->domain);
+        rc = get_paged_frame(ptr->u.gmfn, &buf->frame, &buf->page,
+                             buf->read_only, buf->domain);
         if ( rc != GNTST_okay )
             PIN_FAIL(out, rc,
                      "source frame %"PRI_xen_pfn" invalid.\n", ptr->u.gmfn);
@@ -2931,7 +2935,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARA
 }
 
 static s16
-__gnttab_swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
+swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
 {
     struct domain *d = rcu_lock_current_domain();
     struct grant_table *gt = d->grant_table;
@@ -3007,7 +3011,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
             return i;
         if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             return -EFAULT;
-        op.status = __gnttab_swap_grant_ref(op.ref_a, op.ref_b);
+        op.status = swap_grant_ref(op.ref_a, op.ref_b);
         if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             return -EFAULT;
         guest_handle_add_offset(uop, 1);
@@ -3015,8 +3019,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
     return 0;
 }
 
-static int __gnttab_cache_flush(gnttab_cache_flush_t *cflush,
-                                grant_ref_t *cur_ref)
+static int cache_flush(gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref)
 {
     struct domain *d, *owner;
     struct page_info *page;
@@ -3106,7 +3109,7 @@ gnttab_cache_flush(XEN_GUEST_HANDLE_PARA
         return -EFAULT;
     for ( ; ; )
     {
-        int ret = __gnttab_cache_flush(&op, cur_ref);
+        int ret = cache_flush(&op, cur_ref);
 
         if ( ret < 0 )
             return ret;