Commit aec286c

ebiggers authored and herbertx committed
crypto: lrw - don't access already-freed walk.iv

If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing.

Fixes: c778f96 ("crypto: lrw - Optimize tweak computation")
Cc: <[email protected]> # v4.20+
Cc: Ondrej Mosnacek <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
1 parent 11fe71f commit aec286c

File tree

1 file changed: +3 −1 lines changed


crypto/lrw.c

Lines changed: 3 additions & 1 deletion
@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
 	}
 
 	err = skcipher_walk_virt(&w, req, false);
-	iv = (__be32 *)w.iv;
+	if (err)
+		return err;
+
+	iv = (__be32 *)w.iv;
 	counter[0] = be32_to_cpu(iv[3]);
 	counter[1] = be32_to_cpu(iv[2]);
 	counter[2] = be32_to_cpu(iv[1]);
