
Commit c146a2b

ramosian-glider authored and torvalds committed
mm, kasan: account for object redzone in SLUB's nearest_obj()
When looking up the nearest SLUB object for a given address, correctly
calculate its offset if SLAB_RED_ZONE is enabled for that cache.

Previously, when KASAN had detected an error on an object from a cache
with SLAB_RED_ZONE set, the actual start address of the object was
miscalculated, which led to random stacks being reported.

Fixes: 7ed2f9e ("mm, kasan: SLAB support")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Steven Rostedt (Red Hat) <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Kuthonuzo Luruo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 734537c commit c146a2b

2 files changed: 7 additions, 5 deletions

include/linux/slub_def.h

Lines changed: 6 additions & 4 deletions
@@ -119,15 +119,17 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason);
 
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 				void *x) {
 	void *object = x - (x - page_address(page)) % cache->size;
 	void *last_object = page_address(page) +
 		(page->objects - 1) * cache->size;
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
 }
 
 #endif /* _LINUX_SLUB_DEF_H */
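
For intuition, here is a minimal user-space sketch of the arithmetic nearest_obj() now performs. It is not kernel code: SLOT_SIZE, RED_LEFT_PAD, the page buffer, and the stubbed page_address() are hypothetical stand-ins for cache->size, s->red_left_pad, the slab page, and the real helpers. It shows that rounding an address down to the start of its slot lands on the left red zone when SLAB_RED_ZONE is active, and that adding the left pad, as fixup_red_left() does, recovers the true object start.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-ins for cache->size and s->red_left_pad. */
#define SLOT_SIZE    256
#define RED_LEFT_PAD  64

/* Stand-in slab page; page_address() simply returns its base. */
static char page[4096];
static void *page_address(void) { return page; }

/* Round an address down to the start of its slot, as nearest_obj() does. */
static void *slot_start(void *x)
{
	size_t off = (size_t)((char *)x - (char *)page_address());
	return (char *)page_address() + (off - off % SLOT_SIZE);
}

int main(void)
{
	/* An address 10 bytes into the third object's payload. */
	void *addr = page + 2 * SLOT_SIZE + RED_LEFT_PAD + 10;

	void *before_fix = slot_start(addr);                   /* points at the red zone */
	void *after_fix  = (char *)before_fix + RED_LEFT_PAD;  /* points at the object   */

	printf("slot start (old nearest_obj): offset %td\n",
	       (char *)before_fix - page);
	printf("object start (with fixup):    offset %td\n",
	       (char *)after_fix - page);
	return 0;
}

In this sketch the old calculation would have reported offset 512, which is inside the left red zone rather than the object itself, matching the miscalculated start address described in the commit message.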

mm/slub.c

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ static inline int kmem_cache_debug(struct kmem_cache *s)
 #endif
 }
 
-static inline void *fixup_red_left(struct kmem_cache *s, void *p)
+inline void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
 		p += s->red_left_pad;