From patchwork Mon May 16 19:23:30 2016
X-Patchwork-Submitter: Maya Erez
X-Patchwork-Id: 9105441
X-Patchwork-Delegate: kvalo@adurom.com
From: Maya Erez
To: Kalle Valo
Cc: Maya Erez, linux-wireless@vger.kernel.org, wil6210@qca.qualcomm.com
Subject: [PATCH 1/6] wil6210: fix race conditions between TX send and completion
Date: Mon, 16 May 2016 22:23:30 +0300
Message-Id: <1463426615-15523-2-git-send-email-qca_merez@qca.qualcomm.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1463426615-15523-1-git-send-email-qca_merez@qca.qualcomm.com>
References: <1463426615-15523-1-git-send-email-qca_merez@qca.qualcomm.com>
X-Mailing-List: linux-wireless@vger.kernel.org

There are two possible race conditions, both solved by adding a memory
barrier:

1. wil_tx_complete reads the swhead to determine whether the vring is
   empty.  If the swhead is updated before __wil_tx_vring/
   __wil_tx_vring_tso have finished writing the descriptor, the
   completion loop does not stop at that descriptor, and since the DU
   bit may still be set from a previous run, the skb can be treated as
   completed before it was actually sent, leading to a double free of
   the same skb.

2. __wil_tx_vring/__wil_tx_vring_tso calculate the number of available
   descriptors from the swtail.  If the swtail is updated before the
   memset of ctx to zero has completed, the descriptor can be handled
   while ctx is still being zeroed.
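To illustrate the ordering both fixes rely on, here is a minimal userspace
sketch; it is not the wil6210 driver code, and the ring, desc and field
names are invented for the example.  C11 release stores on the head and
tail indices stand in for the wmb() calls in the patch: the slot contents
must be visible before the index that publishes them.

/*
 * Sketch only: a single-producer/single-consumer ring where the index
 * update is the publication point, mirroring swhead/swtail in the patch.
 */
#include <inttypes.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8

struct desc {
	uint32_t data;
	bool du;		/* "descriptor used" completion flag */
};

struct ring {
	struct desc slots[RING_SIZE];
	_Atomic uint32_t head;	/* advanced by the send path       */
	_Atomic uint32_t tail;	/* advanced by the completion path */
};

/* Send path: write the descriptor, then publish it by advancing head.
 * The release store plays the role of the first wmb(): the descriptor
 * contents (including the cleared DU flag) become visible before the
 * new head value, so the completion path cannot mistake a stale DU bit
 * for "done".
 */
static void ring_send(struct ring *r, uint32_t data)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	struct desc *d = &r->slots[head % RING_SIZE];

	d->data = data;
	d->du = false;

	atomic_store_explicit(&r->head, head + 1, memory_order_release);
}

/* Completion path: reclaim descriptors up to head, clear each slot,
 * then publish the new tail.  The release store plays the role of the
 * second wmb(): the send path never sees a "free" slot whose clearing
 * is still in progress.
 */
static void ring_complete(struct ring *r)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);

	while (tail != head) {
		struct desc *d = &r->slots[tail % RING_SIZE];

		printf("completed data=%" PRIu32 "\n", d->data);
		memset(d, 0, sizeof(*d));
		tail++;
	}

	atomic_store_explicit(&r->tail, tail, memory_order_release);
}

int main(void)
{
	struct ring r = { 0 };

	ring_send(&r, 42);
	ring_complete(&r);
	return 0;
}

The same rule applies on both sides: fill or clear the slot first, advance
the index second.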
Signed-off-by: Maya Erez
---
 drivers/net/wireless/ath/wil6210/txrx.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
index a4e4379..fa6ea24 100644
--- a/drivers/net/wireless/ath/wil6210/txrx.c
+++ b/drivers/net/wireless/ath/wil6210/txrx.c
@@ -1551,6 +1551,13 @@ static int __wil_tx_vring_tso(struct wil6210_priv *wil, struct vring *vring,
 			    vring_index, used, used + descs_used);
 	}
 
+	/* Make sure to advance the head only after descriptor update is done.
+	 * This will prevent a race condition where the completion thread
+	 * will see the DU bit set from previous run and will handle the
+	 * skb before it was completed.
+	 */
+	wmb();
+
 	/* advance swhead */
 	wil_vring_advance_head(vring, descs_used);
 	wil_dbg_txrx(wil, "TSO: Tx swhead %d -> %d\n", swhead, vring->swhead);
@@ -1691,6 +1698,13 @@ static int __wil_tx_vring(struct wil6210_priv *wil, struct vring *vring,
 			    vring_index, used, used + nr_frags + 1);
 	}
 
+	/* Make sure to advance the head only after descriptor update is done.
+	 * This will prevent a race condition where the completion thread
+	 * will see the DU bit set from previous run and will handle the
+	 * skb before it was completed.
+	 */
+	wmb();
+
 	/* advance swhead */
 	wil_vring_advance_head(vring, nr_frags + 1);
 	wil_dbg_txrx(wil, "Tx[%2d] swhead %d -> %d\n", vring_index, swhead,
@@ -1914,6 +1928,12 @@ int wil_tx_complete(struct wil6210_priv *wil, int ringid)
 			wil_consume_skb(skb, d->dma.error == 0);
 		}
 		memset(ctx, 0, sizeof(*ctx));
+		/* Make sure the ctx is zeroed before updating the tail
+		 * to prevent a case where wil_tx_vring will see
+		 * this descriptor as used and handle it before ctx zero
+		 * is completed.
+		 */
+		wmb();
 		/* There is no need to touch HW descriptor:
 		 * - ststus bit TX_DMA_STATUS_DU is set by design,
 		 *   so hardware will not try to process this desc.,