From patchwork Fri Oct  4 05:16:38 2013
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 2987081
Message-ID: <1380863798.3564.12.camel@edumazet-glaptop.roam.corp.google.com>
Subject: Re: [PATCH v3 net-next] fix unsafe set_memory_rw from softirq
From: Eric Dumazet
To: Alexei Starovoitov
Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, Eric Dumazet,
 Daniel Borkmann, linuxppc-dev@lists.ozlabs.org, "David S. Miller",
 linux-arm-kernel@lists.infradead.org
Date: Thu, 03 Oct 2013 22:16:38 -0700
In-Reply-To: <1380859875-31025-1-git-send-email-ast@plumgrid.com>
References: <1380859875-31025-1-git-send-email-ast@plumgrid.com>

On Thu, 2013-10-03 at 21:11 -0700, Alexei Starovoitov wrote:
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index a6ac848..5d66cd9 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -25,15 +25,20 @@ struct sk_filter
>  {
>  	atomic_t		refcnt;
>  	unsigned int		len;	/* Number of filter blocks */
> +	struct rcu_head		rcu;
>  	unsigned int		(*bpf_func)(const struct sk_buff *skb,
>  					    const struct sock_filter *filter);
> -	struct rcu_head		rcu;
> +	/* insns start right after bpf_func, so that sk_run_filter() fetches
> +	 * first insn from the same cache line that was used to call into
> +	 * sk_run_filter()
> +	 */
>  	struct sock_filter	insns[0];
>  };
>
>  static inline unsigned int sk_filter_len(const struct sk_filter *fp)
>  {
> -	return fp->len * sizeof(struct sock_filter) + sizeof(*fp);
> +	return max(fp->len * sizeof(struct sock_filter),
> +		   sizeof(struct work_struct)) + sizeof(*fp);
>  }

I would use for include/linux/filter.h this (untested) diff:

(Note the added include.)

I also removed your comment about cache lines, since there is nothing to
align stuff on a cache line boundary.

This way, you can use sk_filter_size(fp, fprog->len) instead of doing
the max() games in sk_attach_filter() and sk_unattached_filter_create().

Other than that, I think your patch is fine.
diff --git a/include/linux/filter.h b/include/linux/filter.h
index a6ac848..281b05c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -6,6 +6,7 @@

 #include <linux/atomic.h>
 #include <linux/compat.h>
+#include <linux/workqueue.h>
 #include <uapi/linux/filter.h>

 #ifdef CONFIG_COMPAT
@@ -25,15 +26,20 @@ struct sk_filter
 {
 	atomic_t		refcnt;
 	unsigned int		len;	/* Number of filter blocks */
+	struct rcu_head		rcu;
 	unsigned int		(*bpf_func)(const struct sk_buff *skb,
 					    const struct sock_filter *filter);
-	struct rcu_head		rcu;
-	struct sock_filter	insns[0];
+	union {
+		struct work_struct	work;
+		struct sock_filter	insns[0];
+	};
 };

-static inline unsigned int sk_filter_len(const struct sk_filter *fp)
+static inline unsigned int sk_filter_size(const struct sk_filter *fp,
+					  unsigned int proglen)
 {
-	return fp->len * sizeof(struct sock_filter) + sizeof(*fp);
+	return max(sizeof(*fp),
+		   offsetof(struct sk_filter, insns[proglen]));
 }

 extern int sk_filter(struct sock *sk, struct sk_buff *skb);