From patchwork Wed May 4 21:42:53 2016
X-Patchwork-Submitter: John Denker
X-Patchwork-Id: 9019021
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Subject: Re: [PATCH 1/3] random: replace non-blocking pool with a
 Chacha20-based CRNG
To: tytso@mit.edu, "H. Peter Anvin", noloader@gmail.com,
 linux-kernel@vger.kernel.org, Stephan Mueller, Herbert Xu,
 andi@firstfloor.org, Sandy Harris, cryptography@lakedaemon.net,
 linux-crypto@vger.kernel.org
References: <1462170413-7164-1-git-send-email-tytso@mit.edu>
 <1462170413-7164-2-git-send-email-tytso@mit.edu>
 <20160504174901.GC3901@thunk.org> <20160504190723.GD3901@thunk.org>
From: John Denker
Message-ID: <572A6CDD.10503@av8n.com>
Date: Wed, 4 May 2016 14:42:53 -0700
In-Reply-To: <20160504190723.GD3901@thunk.org>
List-ID: linux-crypto.vger.kernel.org

On 05/04/2016 12:07 PM, tytso@thunk.org wrote:

> it doesn't hit the
> UB case which Jeffrey was concerned about.

That should be good enough for present purposes....

However, in the interests of long-term maintainability, I would
suggest sticking in a comment or assertion saying that ror32(,shift)
is never called with shift=0.  This can be removed if/when bitops.h
is upgraded.

There is a track record of compilers doing Bad Things in response
to UB code, including some very counterintuitive Bad Things.

On Wed, May 04, 2016 at 11:29:57AM -0700, H. Peter Anvin wrote:
>>
>> If bitops.h doesn't do the right thing, we need to
>> fix bitops.h.

Most of the ror and rol functions in linux/bitops.h should be
considered unsafe, as currently implemented.
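To make the hazard concrete, here is a minimal userspace sketch (mine,
not from the kernel tree; the file name and function name are only for
illustration).  It mirrors the shape of the current ror32() definition;
calling it with shift=0 evaluates (word << 32), which is Undefined
Behavior for a 32-bit operand and which UBSan flags at run time:

  /* rot-demo.c -- illustrative only; build with:
   *   cc -fsanitize=undefined rot-demo.c && ./a.out
   */
  #include <stdint.h>
  #include <stdio.h>

  /* Same shape as the current bitops.h ror32(). */
  static uint32_t ror32_current(uint32_t word, unsigned int shift)
  {
          return (word >> shift) | (word << (32 - shift)); /* UB when shift == 0 */
  }

  int main(void)
  {
          printf("%08x\n", ror32_current(0x80000001u, 0));
          return 0;
  }

Until bitops.h itself is fixed, a comment or a WARN_ON_ONCE(shift == 0)
at the call site would at least document the restriction.

For reference, the current definitions are at: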
  http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bitops.h?id=04974df8049fc4240d22759a91e035082ccd18b4#n103

I don't see anything in the file that suggests any limits on the range
of the second argument.  So at best it is an undocumented trap for the
unwary.  This has demonstrably been a problem in the past.  The
explanation in the attached fix-rol32.diff makes amusing reading.

Of the eight functions ror64, rol64, ror32, rol32, ror16, rol16, ror8,
and rol8, only one of them can handle shifting by zero, namely rol32.
It was upgraded on Thu Dec 3 22:04:01 2015; see the attached
fix-rol32.diff.  I find it very odd that the other seven functions were
not upgraded.  I suggest the attached fix-others.diff would make things
more consistent.

Beware that shifting by an amount >= the number of bits in the word
remains Undefined Behavior.  This should be either documented or fixed.
It could be fixed easily enough.

commit 03b97eeb5401ede1e1d7b6fbf6a9575db8d0efa6
Author: John Denker
Date:   Wed May 4 13:55:51 2016 -0700

    Make ror64, rol64, ror32, ror16, rol16, ror8, and rol8 consistent
    with rol32 in their handling of shifting by a zero amount.

    Same overall rationale as in d7e35dfa, just more consistently applied.

    Beware that shifting by an amount >= the number of bits in the word
    remains Undefined Behavior.  This should be either documented or
    fixed.  It could be fixed easily enough.

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index defeaac..97096b4 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -87,7 +87,7 @@ static __always_inline unsigned long hweight_long(unsigned long w)
  */
 static inline __u64 rol64(__u64 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (64 - shift));
+	return (word << shift) | (word >> ((-shift) & 63));
 }
 
 /**
@@ -97,7 +97,7 @@ static inline __u64 rol64(__u64 word, unsigned int shift)
  */
 static inline __u64 ror64(__u64 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (64 - shift));
+	return (word >> shift) | (word << ((-shift) & 63));
 }
 
 /**
@@ -117,7 +117,7 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
  */
 static inline __u32 ror32(__u32 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (32 - shift));
+	return (word >> shift) | (word << ((-shift) & 31));
 }
 
 /**
@@ -127,7 +127,7 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
  */
 static inline __u16 rol16(__u16 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (16 - shift));
+	return (word << shift) | (word >> ((-shift) & 15));
 }
 
 /**
@@ -137,7 +137,7 @@ static inline __u16 rol16(__u16 word, unsigned int shift)
  */
 static inline __u16 ror16(__u16 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (16 - shift));
+	return (word >> shift) | (word << ((-shift) & 15));
 }
 
 /**
@@ -147,7 +147,7 @@ static inline __u16 ror16(__u16 word, unsigned int shift)
  */
 static inline __u8 rol8(__u8 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (8 - shift));
+	return (word << shift) | (word >> ((-shift) & 7));
 }
 
 /**
@@ -157,7 +157,7 @@ static inline __u8 rol8(__u8 word, unsigned int shift)
  */
 static inline __u8 ror8(__u8 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (8 - shift));
+	return (word >> shift) | (word << ((-shift) & 7));
 }
 
 /**
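For completeness, here is a small userspace check (mine, not part of
fix-others.diff) of the masked form.  With shift=0 the complementary
shift count becomes (-0) & 31 == 0 rather than 32, so the word comes
back unchanged and no shift count ever reaches the word width:

  /* rot-check.c -- illustrative only; build with:
   *   cc rot-check.c && ./a.out
   */
  #include <assert.h>
  #include <stdint.h>

  static uint32_t ror32_fixed(uint32_t word, unsigned int shift)
  {
          return (word >> shift) | (word << ((-shift) & 31));
  }

  static uint64_t rol64_fixed(uint64_t word, unsigned int shift)
  {
          return (word << shift) | (word >> ((-shift) & 63));
  }

  int main(void)
  {
          assert(ror32_fixed(0xdeadbeefu, 0) == 0xdeadbeefu);  /* rotate by zero */
          assert(ror32_fixed(0x80000001u, 1) == 0xc0000000u);
          assert(rol64_fixed(1ULL, 0) == 1ULL);
          assert(rol64_fixed(1ULL, 63) == 0x8000000000000000ULL);
          return 0;
  }

The (-shift) & (BITS - 1) idiom is the same one d7e35dfa already
applied to rol32; gcc and clang recognize the pattern and still emit a
single rotate instruction on x86.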