py/gc: Implement GC running by allocation threshold.
Currently, MicroPython runs the GC when it cannot allocate a block of memory,
which happens when the heap is exhausted. However, that policy doesn't work well
with "infinite" heaps, e.g. ones backed by virtual memory - there will be a
lot of swap thrashing long before the virtual memory is exhausted. In such
cases an "allocation threshold" policy is used instead: a GC is run after some
number of allocations have been made. Details vary; for example, the number of
allocations or the total amount allocated can be used, the threshold may be
self-adjusting based on the GC outcome, etc.
This change implements a simple variant of such a policy for MicroPython. The
amount of memory allocated since the last collection is used as the threshold,
which also makes the policy useful for the typical finite-size, and small, heaps
used by MicroPython ports. Such a GC policy is indeed useful for those heaps too,
as it allows fragmentation to be controlled better. For example, if the threshold
is set to half the heap size, then for an application which makes a large number
of small allocations, the policy will (try to) keep half of the heap in a nicely
defragmented state for an occasional large allocation (a short sketch of this
follows below).
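
As a rough illustration of the half-of-heap idea (not part of this patch), on a
port that provides gc.mem_alloc() and gc.mem_free() the threshold could be
derived from the heap size at startup; the choice of one half is just an example:

    import gc

    # Total heap size = bytes currently allocated + bytes still free.
    heap_size = gc.mem_alloc() + gc.mem_free()

    # Ask for a collection once roughly half the heap has been allocated
    # since the previous collection.
    gc.threshold(heap_size // 2)
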
For an application which doesn't exhibit such behavior, there won't be any
visible effect, except for the GC running more frequently, which however may
affect performance. To address this, the GC threshold is configurable and is
off by default. It is set with a gc.threshold(amount_in_bytes) call and can be
queried by calling gc.threshold() without an argument.
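
A minimal usage sketch of the new call (the value is arbitrary, for illustration
only):

    import gc

    gc.threshold(4096)     # collect after ~4096 bytes have been allocated
    print(gc.threshold())  # query the currently configured threshold
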
diff --git a/py/gc.c b/py/gc.c
index 1c1865c..97868c0 100644
--- a/py/gc.c
+++ b/py/gc.c
@@ -152,6 +152,12 @@
// allow auto collection
MP_STATE_MEM(gc_auto_collect_enabled) = 1;
+ #if MICROPY_GC_ALLOC_THRESHOLD
+ // by default, maxuint for gc threshold, effectively turning gc-by-threshold off
+ MP_STATE_MEM(gc_alloc_threshold) = (size_t)-1;
+ MP_STATE_MEM(gc_alloc_amount) = 0;
+ #endif
+
#if MICROPY_PY_THREAD
mp_thread_mutex_init(&MP_STATE_MEM(gc_mutex));
#endif
@@ -294,6 +300,9 @@
void gc_collect_start(void) {
GC_ENTER();
MP_STATE_MEM(gc_lock_depth)++;
+ #if MICROPY_GC_ALLOC_THRESHOLD
+ MP_STATE_MEM(gc_alloc_amount) = 0;
+ #endif
MP_STATE_MEM(gc_stack_overflow) = 0;
MP_STATE_MEM(gc_sp) = MP_STATE_MEM(gc_stack);
// Trace root pointers. This relies on the root pointers being organised
@@ -405,6 +414,15 @@
size_t start_block;
size_t n_free = 0;
int collected = !MP_STATE_MEM(gc_auto_collect_enabled);
+
+ #if MICROPY_GC_ALLOC_THRESHOLD
+ if (!collected && MP_STATE_MEM(gc_alloc_amount) >= MP_STATE_MEM(gc_alloc_threshold)) {
+ GC_EXIT();
+ gc_collect();
+ GC_ENTER();
+ }
+ #endif
+
for (;;) {
// look for a run of n_blocks available blocks
@@ -456,6 +474,10 @@
void *ret_ptr = (void*)(MP_STATE_MEM(gc_pool_start) + start_block * BYTES_PER_BLOCK);
DEBUG_printf("gc_alloc(%p)\n", ret_ptr);
+ #if MICROPY_GC_ALLOC_THRESHOLD
+ MP_STATE_MEM(gc_alloc_amount) += n_blocks;
+ #endif
+
GC_EXIT();
// zero out the additional bytes of the newly allocated blocks