From patchwork Wed Dec 2 01:38:16 2020
X-Patchwork-Submitter: Stephen Rothwell
X-Patchwork-Id: 11944469
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 2 Dec 2020 12:38:16 +1100
From: Stephen Rothwell
To: David Miller, Networking, Jakub Kicinski
Cc: Björn Töpel, Daniel Borkmann, Jesper Dangaard Brouer, Lorenzo Bianconi,
 Linux Kernel Mailing List, Linux Next Mailing List
Subject: linux-next: build failure after merge of the net-next tree
Message-ID: <20201202123816.5f3a9743@canb.auug.org.au>
X-Mailing-List: netdev@vger.kernel.org

Hi all,

After merging the net-next tree, today's linux-next build (powerpc
ppc64_defconfig) failed like this:

net/core/xdp.c: In function 'xdp_return_frame_bulk':
net/core/xdp.c:417:3: error: too few arguments to function '__xdp_return'
  417 |   __xdp_return(xdpf->data, &xdpf->mem, false);
      |   ^~~~~~~~~~~~
net/core/xdp.c:340:13: note: declared here
  340 | static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
      |             ^~~~~~~~~~~~

Caused by commit

  8965398713d8 ("net: xdp: Introduce bulking for xdp tx return path")

interacting with commit

  ed1182dc004d ("xdp: Handle MEM_TYPE_XSK_BUFF_POOL correctly in
  xdp_return_buff()")

from the bpf tree.
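
For reference, the bpf tree commit adds a fourth parameter to __xdp_return(),
which is what the new bulk return path (written against the older
three-argument form) trips over. A rough before/after sketch of the
declaration, paraphrased from the two trees rather than copied verbatim:

/* net-next: what xdp_return_frame_bulk() was written against */
static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct);

/* bpf tree, after ed1182dc004d: the extra xdp_buff pointer is only consumed
 * in the MEM_TYPE_XSK_BUFF_POOL case (it ends up in xsk_buff_free(xdp)), a
 * memory type that only an xdp_buff carries, so xdp_frame callers pass NULL.
 */
static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
			 struct xdp_buff *xdp);

So the merge fix below just passes NULL for the extra argument, matching what
the other xdp_return_frame*() callers do after that commit.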
I applied the following merge fix patch.

From: Stephen Rothwell
Date: Wed, 2 Dec 2020 12:33:14 +1100
Subject: [PATCH] fix up for "xdp: Handle MEM_TYPE_XSK_BUFF_POOL correctly in
 xdp_return_buff()"

Signed-off-by: Stephen Rothwell
---
 net/core/xdp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index f2cdacd81d43..3100f9711eae 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -414,7 +414,7 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 	struct xdp_mem_allocator *xa;
 
 	if (mem->type != MEM_TYPE_PAGE_POOL) {
-		__xdp_return(xdpf->data, &xdpf->mem, false);
+		__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
 		return;
 	}