Message ID | 20190906031306.GA20435@gondor.apana.org.au (mailing list archive)
---|---
State | Accepted
Delegated to | Herbert Xu
Series | [v2] crypto: skcipher - Unmap pages after an external error
On Thu, 5 Sep 2019 at 20:13, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> skcipher_walk_done may be called with an error by internal or
> external callers. For those internal callers we shouldn't unmap
> pages but for external callers we must unmap any pages that are
> in use.
>
> This patch distinguishes between the two cases by checking whether
> walk->nbytes is zero or not. For internal callers, we now set
> walk->nbytes to zero prior to the call. For external callers,
> walk->nbytes has always been non-zero (as zero is used to indicate
> the termination of a walk).
>
> Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>
> diff --git a/crypto/skcipher.c b/crypto/skcipher.c
> index 5d836fc3df3e..22753c1c7202 100644
> --- a/crypto/skcipher.c
> +++ b/crypto/skcipher.c
> @@ -90,7 +90,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
>  	return max(start, end_page);
>  }
>
> -static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
> +static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
>  {
>  	u8 *addr;
>
> @@ -98,19 +98,21 @@ static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
>  	addr = skcipher_get_spot(addr, bsize);
>  	scatterwalk_copychunks(addr, &walk->out, bsize,
>  			       (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
> +	return 0;
>  }
>
>  int skcipher_walk_done(struct skcipher_walk *walk, int err)
>  {
> -	unsigned int n; /* bytes processed */
> -	bool more;
> +	unsigned int n = walk->nbytes;
> +	unsigned int nbytes = 0;
>
> -	if (unlikely(err < 0))
> +	if (!n)
>  		goto finish;
>
> -	n = walk->nbytes - err;
> -	walk->total -= n;
> -	more = (walk->total != 0);
> +	if (likely(err >= 0)) {
> +		n -= err;
> +		nbytes = walk->total - n;
> +	}
>
>  	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
>  				    SKCIPHER_WALK_SLOW |

With this change, we still copy out the output in the
SKCIPHER_WALK_COPY or SKCIPHER_WALK_SLOW cases. I'd expect the failure
case to only do the kunmap()s, but otherwise not make any changes that
are visible to the caller.

> @@ -126,7 +128,7 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
>  			memcpy(walk->dst.virt.addr, walk->page, n);
>  		skcipher_unmap_dst(walk);
>  	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
> -		if (err) {
> +		if (err > 0) {
>  			/*
>  			 * Didn't process all bytes. Either the algorithm is
>  			 * broken, or this was the last step and it turned out
> @@ -134,27 +136,29 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
>  			 * the algorithm requires it.
>  			 */
>  			err = -EINVAL;
> -			goto finish;
> -		}
> -		skcipher_done_slow(walk, n);
> -		goto already_advanced;
> +			nbytes = 0;
> +		} else
> +			n = skcipher_done_slow(walk, n);
>  	}
>
> +	if (err > 0)
> +		err = 0;
> +
> +	walk->total = nbytes;
> +	walk->nbytes = 0;
> +
>  	scatterwalk_advance(&walk->in, n);
>  	scatterwalk_advance(&walk->out, n);
> -already_advanced:
> -	scatterwalk_done(&walk->in, 0, more);
> -	scatterwalk_done(&walk->out, 1, more);
> +	scatterwalk_done(&walk->in, 0, nbytes);
> +	scatterwalk_done(&walk->out, 1, nbytes);
>
> -	if (more) {
> +	if (nbytes) {
>  		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
>  			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
>  		return skcipher_walk_next(walk);
>  	}
> -	err = 0;
> -finish:
> -	walk->nbytes = 0;
>
> +finish:
>  	/* Short-circuit for the common/fast path. */
>  	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
>  		goto out;
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
On Fri, Sep 06, 2019 at 05:52:56PM -0700, Ard Biesheuvel wrote:
>
> With this change, we still copy out the output in the
> SKCIPHER_WALK_COPY or SKCIPHER_WALK_SLOW cases. I'd expect the failure
> case to only do the kunmap()s, but otherwise not make any changes that
> are visible to the caller.

I don't think it matters. After all, for the fast/common path
whatever changes that have been made will be visible to the caller.
I don't see the point in making the slow-path different in this
respect. It also makes no sense to optimise specifically for the
uncommon error case on the slow-path.

Cheers,
On Fri, 6 Sep 2019 at 18:19, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Sep 06, 2019 at 05:52:56PM -0700, Ard Biesheuvel wrote:
> >
> > With this change, we still copy out the output in the
> > SKCIPHER_WALK_COPY or SKCIPHER_WALK_SLOW cases. I'd expect the failure
> > case to only do the kunmap()s, but otherwise not make any changes that
> > are visible to the caller.
>
> I don't think it matters. After all, for the fast/common path
> whatever changes that have been made will be visible to the caller.
> I don't see the point in making the slow-path different in this
> respect. It also makes no sense to optimise specifically for the
> uncommon error case on the slow-path.
>

The point is that doing

  skcipher_walk_virt(&walk, ...);
  skcipher_walk_done(&walk, -EFOO);

may clobber your data if you are executing in place (unless I am
missing something).

If skcipher_walk_done() is called with an error, it should really just
clean up after itself, but not copy back the unknown contents of
temporary buffers.
On Fri, Sep 06, 2019 at 06:32:29PM -0700, Ard Biesheuvel wrote:
>
> The point is that doing
>
>   skcipher_walk_virt(&walk, ...);
>   skcipher_walk_done(&walk, -EFOO);
>
> may clobber your data if you are executing in place (unless I am
> missing something)

You mean encrypting in place? If you're encrypting in place you're
usually on the zero-copy fast path so whatever is left behind by the
algorithm will be visible anyway without any copying.

> If skcipher_walk_done() is called with an error, it should really just
> clean up after itself, but not copy back the unknown contents of
> temporary buffers.

We're not copying uninitialised kernel memory. The temporary space
starts out as a copy of the source and we're just copying it to the
destination.

Cheers,
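[Editor's note: the argument above — that the slow-path bounce buffer is seeded from the source, so copying it back on error can never expose uninitialised memory — can be illustrated with a small stand-alone C sketch. This is illustrative toy code, not the kernel's skcipher implementation; the names `toy_slow_step` and the XOR "cipher" are made up.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of the slow-path bounce buffer: the temporary block starts
 * out as a copy of the source, so the unconditional copy-back writes
 * either the processed output or the original input bytes -- never
 * uninitialised memory. */
static void toy_slow_step(unsigned char *dst, const unsigned char *src,
                          size_t len, int fail)
{
    unsigned char tmp[64];

    assert(len <= sizeof(tmp));
    memcpy(tmp, src, len);          /* temp space = copy of the source */
    if (!fail) {
        for (size_t i = 0; i < len; i++)
            tmp[i] ^= 0xAA;         /* stand-in for the real cipher */
    }
    memcpy(dst, tmp, len);          /* copy-back is safe either way */
}
```

On the error path the destination simply ends up equal to the source, which matches Herbert's point that nothing "unknown" is ever copied out.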
On Fri, 6 Sep 2019 at 18:56, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Sep 06, 2019 at 06:32:29PM -0700, Ard Biesheuvel wrote:
> >
> > The point is that doing
> >
> >   skcipher_walk_virt(&walk, ...);
> >   skcipher_walk_done(&walk, -EFOO);
> >
> > may clobber your data if you are executing in place (unless I am
> > missing something)
>
> You mean encrypting in place? If you're encrypting in place you're
> usually on the zero-copy fast path so whatever is left behind by the
> algorithm will be visible anyway without any copying.
>
> > If skcipher_walk_done() is called with an error, it should really just
> > clean up after itself, but not copy back the unknown contents of
> > temporary buffers.
>
> We're not copying uninitialised kernel memory. The temporary space
> starts out as a copy of the source and we're just copying it to the
> destination.
>

Right. In that case, I guess it is safe.

I've tested my XTS/CTS changes (which call skcipher_walk_done() with
an error value in some cases) with Eric's fuzz testing enabled, and it
all works fine, so

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Thanks,
Ard.
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 5d836fc3df3e..22753c1c7202 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -90,7 +90,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
 	return max(start, end_page);
 }
 
-static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	u8 *addr;
 
@@ -98,19 +98,21 @@ static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize,
 			       (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+	return 0;
 }
 
 int skcipher_walk_done(struct skcipher_walk *walk, int err)
 {
-	unsigned int n; /* bytes processed */
-	bool more;
+	unsigned int n = walk->nbytes;
+	unsigned int nbytes = 0;
 
-	if (unlikely(err < 0))
+	if (!n)
 		goto finish;
 
-	n = walk->nbytes - err;
-	walk->total -= n;
-	more = (walk->total != 0);
+	if (likely(err >= 0)) {
+		n -= err;
+		nbytes = walk->total - n;
+	}
 
 	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
 				    SKCIPHER_WALK_SLOW |
@@ -126,7 +128,7 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 			memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
-		if (err) {
+		if (err > 0) {
 			/*
 			 * Didn't process all bytes. Either the algorithm is
 			 * broken, or this was the last step and it turned out
@@ -134,27 +136,29 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 			 * the algorithm requires it.
 			 */
 			err = -EINVAL;
-			goto finish;
-		}
-		skcipher_done_slow(walk, n);
-		goto already_advanced;
+			nbytes = 0;
+		} else
+			n = skcipher_done_slow(walk, n);
 	}
 
+	if (err > 0)
+		err = 0;
+
+	walk->total = nbytes;
+	walk->nbytes = 0;
+
 	scatterwalk_advance(&walk->in, n);
 	scatterwalk_advance(&walk->out, n);
-already_advanced:
-	scatterwalk_done(&walk->in, 0, more);
-	scatterwalk_done(&walk->out, 1, more);
+	scatterwalk_done(&walk->in, 0, nbytes);
+	scatterwalk_done(&walk->out, 1, nbytes);
 
-	if (more) {
+	if (nbytes) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
 		return skcipher_walk_next(walk);
 	}
-	err = 0;
-finish:
-	walk->nbytes = 0;
 
+finish:
 	/* Short-circuit for the common/fast path. */
 	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
 		goto out;
skcipher_walk_done may be called with an error by internal or
external callers. For those internal callers we shouldn't unmap
pages but for external callers we must unmap any pages that are
in use.

This patch distinguishes between the two cases by checking whether
walk->nbytes is zero or not. For internal callers, we now set
walk->nbytes to zero prior to the call. For external callers,
walk->nbytes has always been non-zero (as zero is used to indicate
the termination of a walk).

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
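[Editor's note: the sentinel mechanism the commit message describes — reusing a zeroed `walk->nbytes` to tell internal callers apart from external ones, with no extra flag — can be sketched in a few lines of stand-alone C. This is illustrative toy code, not the kernel's skcipher walk; `toy_walk` and `toy_walk_done` are invented names.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the nbytes sentinel: external callers reach walk_done()
 * with nbytes != 0 (pages still mapped), while internal callers zero
 * nbytes first, signalling that cleanup has already happened. */
struct toy_walk {
    unsigned int nbytes;   /* bytes mapped for the current step; 0 = terminated */
    bool unmapped;         /* records whether walk_done() released mappings */
};

static int toy_walk_done(struct toy_walk *w, int err)
{
    if (w->nbytes) {
        /* External caller: mappings are live, so release them. */
        w->unmapped = true;
        w->nbytes = 0;     /* zero also marks the walk as terminated */
    }
    /* Internal caller (nbytes already 0): nothing to unmap. */
    return err;
}
```

The point of the pattern is that no new field is needed: zero was already the "walk terminated" value, so it doubles as the "cleanup already done" marker.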