
Commit c93989e

Michal Hocko authored and sfrothwell committed
mm, oom: rework oom detection
As pointed out by Linus [2][3], relying on zone_reclaimable as a way to communicate the reclaim progress is rather dubious. I tend to agree: not only is it really obscure, it is not hard to imagine cases where a single page freed in the loop keeps all the reclaimers looping without making any progress, because their gfp_mask wouldn't allow them to get that page anyway (e.g. a single GFP_ATOMIC alloc and free loop). This is rather rare so it doesn't happen in practice, but the current logic is obscure, hard to follow and also non-deterministic.

This is an attempt to make the OOM detection more deterministic and easier to follow, because each reclaimer basically tracks its own progress, which is implemented at the page allocator layer rather than spread out between the allocator and the reclaim code. More on the implementation is described in the first patch.

I have tested several different scenarios, but it should be clear that testing the OOM killer in a representative way is quite hard. There is usually a tiny gap between almost-OOM and full-blown OOM, which is often time sensitive. Anyway, I have tested the following 3 scenarios and I would appreciate suggestions for more.

Testing environment: a virtual machine with 2G of RAM and 2 CPUs, without any swap to make the OOM more deterministic.

1) 2 writers (each doing dd with 4M blocks to a 1G xfs partition, removing the files and starting over again) run in parallel for 10s to build up a lot of dirty pages, and then 100 parallel mem_eaters (anon private populated mmap which waits until it gets a signal) with 80M each are started. This causes an OOM flood of course and I have compared both patched and unpatched kernels. The test is considered finished once no more OOM conditions are detected. This should tell us whether there are any excessive or premature kills. I have performed two runs this time, each after a fresh boot.

* base kernel
$ grep "Killed process" base-oom-run1.log | tail -n1
[  211.824379] Killed process 3086 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:332kB, shmem-rss:0kB
$ grep "Killed process" base-oom-run2.log | tail -n1
[  157.188326] Killed process 3094 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:368kB, shmem-rss:0kB
$ grep "invoked oom-killer" base-oom-run1.log | wc -l
78
$ grep "invoked oom-killer" base-oom-run2.log | wc -l
76

The number of OOM invocations is consistent with my last measurements but the runtime is way too different (it took 800+s). One thing that could have skewed the results was that I was tail -f'ing the serial log on the host system to watch the progress. I have stopped doing that. The results are more consistent now but still too different from the last time. This is really weird, so I've retested with the last 4.2 mmotm again and I am getting a consistent ~220s which is really close to the above. If I apply the WQ vmstat patch on top I am getting close to 160s, so the stale vmstat counters made a difference, which is to be expected. I have a new SSD in my laptop which might have made a difference, but I wouldn't expect it to be that large.

$ grep "DMA32.*all_unreclaimable? no" base-oom-run1.log | wc -l
4
$ grep "DMA32.*all_unreclaimable? no" base-oom-run2.log | wc -l
1

* patched kernel
$ grep "Killed process" patched-oom-run1.log | tail -n1
[  341.164930] Killed process 3099 (mem_eater) total-vm:85852kB, anon-rss:82000kB, file-rss:336kB, shmem-rss:0kB
$ grep "Killed process" patched-oom-run2.log | tail -n1
[  349.111539] Killed process 3082 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:4kB, shmem-rss:0kB
$ grep "invoked oom-killer" patched-oom-run1.log | wc -l
78
$ grep "invoked oom-killer" patched-oom-run2.log | wc -l
77
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run1.log | wc -l
1
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run2.log | wc -l
0

So the number of OOM killer invocations is the same, but the overall runtime of the test was much longer with the patched kernel. This can be attributed to more retries in general. The results from the base kernel are quite inconsistent and I think that consistency is better here.

2) 2 writers again, running for 10s, and then 10 mem_eaters to consume as much memory as possible without triggering the OOM killer. This required a lot of tuning, but I've considered 3 consecutive runs without OOM as a success.

* base kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(15*1024)}' /proc/meminfo)

* patched kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(9*1024)}' /proc/meminfo)

It was -14M for the base 4.2 kernel and -7500M for the patched 4.2 kernel in my last measurements. The patched kernel handled the low memory conditions better and fired the OOM killer later.

3) Costly high-order allocations with a limited amount of memory. Start 10 memeaters in parallel, each with

size=$(awk '/MemTotal/{printf "%d\n", $2/10}' /proc/meminfo)

This will cause the OOM killer to kill one of them, which will free up 200M; then try to use all the remaining space for hugetlb pages and see how many of them will pass. Kill everything, wait 2s and try again. This tests whether we do not fail __GFP_REPEAT costly allocations too early now.

* base kernel
$ sort base-hugepages.log | uniq -c
      1 64
     13 65
      6 66
     20 Trying to allocate 73

* patched kernel
$ sort patched-hugepages.log | uniq -c
     17 65
      3 66
     20 Trying to allocate 73

This also doesn't look very bad, but this particular test is quite timing sensitive.

The above results do seem optimistic, but more loads should obviously be tested. I would really appreciate feedback on the approach I have chosen before I go into more tuning. Is this a viable way to go?

[1] http://lkml.kernel.org/r/[email protected]
[2] http://lkml.kernel.org/r/CA+55aFwapaED7JV6zm-NVkP-jKie+eQ1vDXWrKD=SkbshZSgmw@mail.gmail.com
[3] http://lkml.kernel.org/r/CA+55aFxwg=vS2nrXsQhAUzPQDGb8aQpZi0M7UUh21ftBo-z46Q@mail.gmail.com

This patch (of 3):

__alloc_pages_slowpath has traditionally relied on direct reclaim and did_some_progress as an indicator that it makes sense to retry the allocation rather than declaring OOM. shrink_zones had to rely on zone_reclaimable if shrink_zone didn't make any progress, to prevent a premature OOM killer invocation - the LRU might be full of dirty or writeback pages and direct reclaim cannot clean those up. zone_reclaimable allows rescanning the reclaimable lists several times and restarting if a page is freed. This is really subtle behavior and it might lead to a livelock when a single freed page keeps the allocator looping while the current task will not be able to allocate that single page. The OOM killer would be more appropriate than looping without any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out from shrink_zone, which is too low a level to be appropriate for any high-level decision such as OOM, which is a per-zonelist property. It is __alloc_pages_slowpath which knows how many attempts have been done and what the progress was so far, therefore it is more appropriate to implement this logic there.

The new heuristic is implemented in the should_reclaim_retry helper called from __alloc_pages_slowpath. It tries to be more deterministic and easier to follow. It builds on the assumption that retrying makes sense only if the currently reclaimable memory + free pages would allow the current allocation request to succeed (as per __zone_watermark_ok) for at least one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because writeback might get stuck and reclaimable pages might be pinned for a really long time or even depend on the current allocation context. Therefore a feedback mechanism is implemented which reduces the reclaim target after each reclaim round without any progress. This means that we should eventually converge to only NR_FREE_PAGES as the target, fail the watermark check and proceed to OOM. The backoff is simple and linear: 1/16 of the reclaimable pages is dropped from the target for each round without any progress. We are optimistic and reset the counter after successful reclaim rounds.

Costly high-order allocations mostly preserve their semantics: those without __GFP_REPEAT fail right away, while those which have the flag set back off once the number of reclaimed pages reaches the equivalent of the requested order. The only difference is that if there was no progress during the reclaim we rely on the zone watermark check. This is a more logical thing to do than the previous 1<<order attempts, which were a result of zone_reclaimable faking the progress.

[[email protected]: separate the heuristic into should_reclaim_retry]
[[email protected]: use zone_page_state_snapshot for NR_FREE_PAGES]
[[email protected]: shrink_zones doesn't need to return anything]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Hillf Danton <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
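The linear backoff described in the changelog is easiest to see with a small standalone sketch. This is not part of the patch: the page counts and watermark below are made-up values, and the single comparison stands in for the real __zone_watermark_ok() walk over the zonelist; only the 1/16-per-round arithmetic mirrors should_reclaim_retry().

#include <stdio.h>

#define MAX_RECLAIM_RETRIES 16
#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long reclaimable = 100000;  /* hypothetical zone_reclaimable_pages() */
        unsigned long free = 2000;           /* hypothetical NR_FREE_PAGES snapshot */
        unsigned long min_wmark = 5000;      /* hypothetical min watermark for the request */

        for (int loops = 0; loops <= MAX_RECLAIM_RETRIES; loops++) {
                unsigned long available = reclaimable;

                /* linear backoff: drop 1/16 of the reclaimable pages per no-progress round */
                available -= DIV_ROUND_UP(loops * available, MAX_RECLAIM_RETRIES);
                available += free;

                /* crude stand-in for the per-zone __zone_watermark_ok() check */
                printf("no_progress_loops=%2d available=%6lu -> %s\n",
                       loops, available,
                       available > min_wmark ? "retry" : "give up (OOM path)");
        }
        return 0;
}

After MAX_RECLAIM_RETRIES no-progress rounds the estimate has shrunk to the free pages alone, so the watermark check fails and the allocator falls through to the OOM path, which is the convergence the changelog relies on.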
1 parent 89d95aa commit c93989e

3 files changed: 88 additions (+), 29 deletions (-)

include/linux/swap.h

Lines changed: 1 addition & 0 deletions
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
                                         struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
                                         gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);

mm/page_alloc.c

Lines changed: 83 additions & 8 deletions
@@ -2976,6 +2976,75 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
         return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Maximum number of reclaim retries without any progress before OOM killer
+ * is consider as the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
+/*
+ * Checks whether it makes sense to retry the reclaim to make a forward progress
+ * for the given allocation request.
+ * The reclaim feedback represented by did_some_progress (any progress during
+ * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
+ * pages) and no_progress_loops (number of reclaim rounds without any progress
+ * in a row) is considered as well as the reclaimable pages on the applicable
+ * zone list (with a backoff mechanism which is a function of no_progress_loops).
+ *
+ * Returns true if a retry is viable or false to enter the oom path.
+ */
+static inline bool
+should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+                     struct alloc_context *ac, int alloc_flags,
+                     bool did_some_progress, unsigned long pages_reclaimed,
+                     int no_progress_loops)
+{
+        struct zone *zone;
+        struct zoneref *z;
+
+        /*
+         * Make sure we converge to OOM if we cannot make any progress
+         * several times in the row.
+         */
+        if (no_progress_loops > MAX_RECLAIM_RETRIES)
+                return false;
+
+        /* Do not retry high order allocations unless they are __GFP_REPEAT */
+        if (order > PAGE_ALLOC_COSTLY_ORDER) {
+                if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+                        return false;
+
+                if (did_some_progress)
+                        return true;
+        }
+
+        /*
+         * Keep reclaiming pages while there is a chance this will lead somewhere.
+         * If none of the target zones can satisfy our allocation request even
+         * if all reclaimable pages are considered then we are screwed and have
+         * to go OOM.
+         */
+        for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+                unsigned long available;
+
+                available = zone_reclaimable_pages(zone);
+                available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);
+                available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
+
+                /*
+                 * Would the allocation succeed if we reclaimed the whole available?
+                 */
+                if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+                                ac->high_zoneidx, alloc_flags, available)) {
+                        /* Wait for some write requests to complete then retry */
+                        wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+                        return true;
+                }
+        }
+
+        return false;
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
                                                 struct alloc_context *ac)
@@ -2988,6 +3057,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
         enum migrate_mode migration_mode = MIGRATE_ASYNC;
         bool deferred_compaction = false;
         int contended_compaction = COMPACT_CONTENDED_NONE;
+        int no_progress_loops = 0;
 
         /*
          * In the slowpath, we sanity check order to avoid ever trying to
@@ -3147,23 +3217,28 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
         if (gfp_mask & __GFP_NORETRY)
                 goto noretry;
 
-        /* Keep reclaiming pages as long as there is reasonable progress */
-        pages_reclaimed += did_some_progress;
-        if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-            ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-                /* Wait for some write requests to complete then retry */
-                wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-                goto retry;
+        if (did_some_progress) {
+                no_progress_loops = 0;
+                pages_reclaimed += did_some_progress;
+        } else {
+                no_progress_loops++;
         }
 
+        if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+                                 did_some_progress > 0, pages_reclaimed,
+                                 no_progress_loops))
+                goto retry;
+
         /* Reclaim has failed us, start killing things */
         page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
         if (page)
                 goto got_pg;
 
         /* Retry as long as the OOM killer is making progress */
-        if (did_some_progress)
+        if (did_some_progress) {
+                no_progress_loops = 0;
                 goto retry;
+        }
 
 noretry:
         /*
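For orientation, here is a hedged userspace model of the retry bookkeeping this hunk adds to __alloc_pages_slowpath(). The reclaim outcomes are faked, should_retry() is a simplified stand-in (the per-zone watermark walk with backoff is reduced to a single boolean), and only the no_progress_loops/pages_reclaimed handling and the costly-order cutoff mirror the patch.

#include <stdbool.h>
#include <stdio.h>

#define MAX_RECLAIM_RETRIES     16
#define PAGE_ALLOC_COSTLY_ORDER 3

/* simplified stand-in for should_reclaim_retry(); the watermark walk is faked */
static bool should_retry(unsigned order, bool repeat_allowed,
                         bool did_some_progress, unsigned long pages_reclaimed,
                         int no_progress_loops, bool watermark_would_pass)
{
        if (no_progress_loops > MAX_RECLAIM_RETRIES)
                return false;

        /* costly orders keep retrying only with __GFP_REPEAT and a bounded budget */
        if (order > PAGE_ALLOC_COSTLY_ORDER) {
                if (!repeat_allowed || pages_reclaimed >= (1UL << order))
                        return false;
                if (did_some_progress)
                        return true;
        }

        return watermark_would_pass;
}

int main(void)
{
        /* hypothetical per-round reclaim results: some progress, then none */
        unsigned long progress[] = { 32, 32, 0, 0, 0, 0 };
        unsigned long pages_reclaimed = 0;
        int no_progress_loops = 0;

        for (unsigned i = 0; i < sizeof(progress) / sizeof(progress[0]); i++) {
                if (progress[i]) {
                        no_progress_loops = 0;          /* progress resets the counter */
                        pages_reclaimed += progress[i];
                } else {
                        no_progress_loops++;            /* no progress feeds the backoff */
                }

                /* pretend the backed-off watermark check fails after 3 empty rounds */
                bool retry = should_retry(0 /* order */, false /* __GFP_REPEAT */,
                                          progress[i] > 0, pages_reclaimed,
                                          no_progress_loops, no_progress_loops < 3);

                printf("round %u: reclaimed=%lu no_progress_loops=%d -> %s\n",
                       i, pages_reclaimed, no_progress_loops,
                       retry ? "retry" : "OOM path");
        }
        return 0;
}

In the actual hunk the same counter is also reset when the OOM killer itself makes progress, so only genuinely stalled reclaim converges to a kill.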

mm/vmscan.c

Lines changed: 4 additions & 21 deletions
@@ -190,7 +190,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
         unsigned long nr;
 
@@ -2531,18 +2531,15 @@ static inline bool compaction_ready(struct zone *zone, int order)
  *
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
  */
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 {
         struct zoneref *z;
         struct zone *zone;
         unsigned long nr_soft_reclaimed;
         unsigned long nr_soft_scanned;
         gfp_t orig_mask;
         enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
-        bool reclaimable = false;
 
         /*
          * If the number of buffer_heads in the machine exceeds the maximum
@@ -2607,26 +2604,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
                                                         &nr_soft_scanned);
                         sc->nr_reclaimed += nr_soft_reclaimed;
                         sc->nr_scanned += nr_soft_scanned;
-                        if (nr_soft_reclaimed)
-                                reclaimable = true;
                         /* need some check for avoid more shrink_zone() */
                 }
 
-                if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
-                        reclaimable = true;
-
-                if (global_reclaim(sc) &&
-                    !reclaimable && zone_reclaimable(zone))
-                        reclaimable = true;
+                shrink_zone(zone, sc, zone_idx(zone));
         }
 
         /*
          * Restore to original mask to avoid the impact on the caller if we
          * promoted it to __GFP_HIGHMEM.
          */
         sc->gfp_mask = orig_mask;
-
-        return reclaimable;
 }
 
 /*
@@ -2651,7 +2639,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
         int initial_priority = sc->priority;
         unsigned long total_scanned = 0;
         unsigned long writeback_threshold;
-        bool zones_reclaimable;
 retry:
         delayacct_freepages_start();
 
@@ -2662,7 +2649,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
                 vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
                                 sc->priority);
                 sc->nr_scanned = 0;
-                zones_reclaimable = shrink_zones(zonelist, sc);
+                shrink_zones(zonelist, sc);
 
                 total_scanned += sc->nr_scanned;
                 if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2709,10 +2696,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
                 goto retry;
         }
 
-        /* Any of the zones still reclaimable? Don't OOM. */
-        if (zones_reclaimable)
-                return 1;
-
         return 0;
 }
 