Merged
87 commits
41b5c64
very basic setup
ilanashapiro Jul 23, 2025
563906d
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Jul 24, 2025
4b87458
Merge remote-tracking branch 'upstream/ilana' into parallel-solving
ilanashapiro Jul 24, 2025
a6c51df
ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743
NikolajBjorner Jul 24, 2025
01633f7
respect smt configuration parameter in elim_unconstrained simplifier
NikolajBjorner Jul 24, 2025
1a488bb
indentation
NikolajBjorner Jul 25, 2025
6550495
add bash files for test runs
ilanashapiro Jul 25, 2025
e549286
add option to selectively disable variable solving for only ground ex…
NikolajBjorner Jul 26, 2025
95be0cf
remove verbose output
NikolajBjorner Jul 26, 2025
0528c86
fix #7745
NikolajBjorner Jul 26, 2025
8e1a528
ensure atomic constraints are processed by arithmetic solver
NikolajBjorner Jul 26, 2025
1f8b081
#7739 optimization
NikolajBjorner Jul 26, 2025
ad2934f
fix unsound len(substr) axiom
NikolajBjorner Jul 26, 2025
eb24488
FreshConst is_sort (#7748)
humnrdble Jul 27, 2025
e3139d4
#7750
NikolajBjorner Jul 27, 2025
0761394
Add parameter validation for selected API functions
NikolajBjorner Jul 27, 2025
67695b4
updates to ac-plugin
NikolajBjorner Jul 27, 2025
f77123c
enable passive, add check for bloom up-to-date
NikolajBjorner Jul 28, 2025
36fbee3
add top-k fixed-sized min-heap priority queue for top scoring literals
ilanashapiro Jul 28, 2025
f607a70
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Jul 28, 2025
4eeb98d
Merge branch 'ilana' into parallel-solving
NikolajBjorner Jul 29, 2025
2c188a5
set up worker thread batch manager for multithreaded batch cubes para…
ilanashapiro Jul 29, 2025
375d537
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Jul 29, 2025
8a6cbec
fix bug in parallel solving batch setup
ilanashapiro Jul 30, 2025
e6213f8
fix bug
ilanashapiro Jul 31, 2025
2d876d5
allow for internalize implies
NikolajBjorner Aug 1, 2025
89cc9bd
disable pre-processing during cubing
NikolajBjorner Aug 1, 2025
12df9f8
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Aug 1, 2025
2a26776
debugging
ilanashapiro Aug 1, 2025
33c184f
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Aug 1, 2025
97aa46a
remove default constructor
nunoplopes Aug 3, 2025
f23b053
remove a bunch of string copies
nunoplopes Aug 3, 2025
d8fafd8
Update euf_ac_plugin.cpp
NikolajBjorner Aug 3, 2025
b9b3e0d
Update euf_completion.cpp
NikolajBjorner Aug 3, 2025
d66fabe
Update smt_parallel.cpp
NikolajBjorner Aug 3, 2025
aac8787
process cubes as lists of individual lits
ilanashapiro Aug 4, 2025
a0a0670
Merge branch 'ilana' into parallel-solving
NikolajBjorner Aug 4, 2025
c9c3548
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Aug 4, 2025
7df95c0
merge
ilanashapiro Aug 4, 2025
cc8bc84
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Aug 4, 2025
e520a42
merge
ilanashapiro Aug 4, 2025
7a8ba4b
Add support for Algebraic Datatypes in JavaScript/TypeScript bindings…
Copilot Aug 5, 2025
3982b29
chipping away at the new code structure
ilanashapiro Aug 5, 2025
0b21376
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Aug 5, 2025
2fce048
comments
ilanashapiro Aug 5, 2025
cdcc89a
merge
ilanashapiro Aug 5, 2025
d0bf711
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Aug 5, 2025
723de8d
debug infinite recursion and split cubes on existing split atoms that…
ilanashapiro Aug 6, 2025
58e3121
share lemmas, learn from unsat core, try to debug a couple of things,…
ilanashapiro Aug 6, 2025
445339d
merge
ilanashapiro Aug 6, 2025
870729b
merge
ilanashapiro Aug 6, 2025
b1ab695
fix #7603: race condition in Ctrl-C handling (#7755)
nunoplopes Aug 6, 2025
d4a4dd6
add arithemtic saturation
NikolajBjorner Aug 6, 2025
b33f444
add an option to register callback on quantifier instantiation
NikolajBjorner Aug 7, 2025
aad511d
missing new closure
NikolajBjorner Aug 7, 2025
31a3037
add Z3_solver_propagate_on_binding to ml callback declarations
NikolajBjorner Aug 7, 2025
d218e87
add python file
levnach Aug 6, 2025
efa63db
debug under defined calls
levnach Aug 7, 2025
eeb1c18
more untangle params
levnach Aug 7, 2025
3eda386
precalc parameters to define the eval order
levnach Aug 7, 2025
f5016b4
remove a printout
levnach Aug 7, 2025
30830aa
rename a Python file
levnach Aug 7, 2025
fa3d341
add on_binding callbacks across APIs
NikolajBjorner Aug 7, 2025
d57dd6e
use jboolean in Native interface
NikolajBjorner Aug 7, 2025
0cefc92
register on_binding attribute
NikolajBjorner Aug 7, 2025
7ba967e
fix java build for java bindings
NikolajBjorner Aug 7, 2025
2ac1b24
avoid interferring side-effects in function calls
NikolajBjorner Aug 7, 2025
fcd3a70
remove theory_str and classes that are only used by it
NikolajBjorner Aug 8, 2025
baa0588
remove automata from python build
NikolajBjorner Aug 8, 2025
efb0bda
remove ref to theory_str
NikolajBjorner Aug 8, 2025
88293bf
get the finest factorizations before project
levnach Aug 7, 2025
8598a74
rename add_lcs to add_lc
levnach Aug 8, 2025
72757c4
resolve bad bug about l2g and g2l translators using wrong global cont…
ilanashapiro Aug 8, 2025
a9228f4
initial attempt at dynamically switching from greedy to frugal splitt…
ilanashapiro Aug 9, 2025
b7d5add
Update RELEASE_NOTES.md
NikolajBjorner Aug 10, 2025
8493c30
resolve bug about not translating managers correctly for the second p…
ilanashapiro Aug 11, 2025
2e6d95d
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Aug 11, 2025
e33dc47
remove unused square-free check
levnach Aug 11, 2025
7d57e6f
Merge branch 'ilana' into parallel-solving
NikolajBjorner Aug 11, 2025
6b3b8ac
add some debug prints and impelement internal polynomial fix
ilanashapiro Aug 11, 2025
ffb4bf1
merge
ilanashapiro Aug 11, 2025
de64a53
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Aug 11, 2025
53eb2ca
Merge branch 'Z3Prover:master' into parallel-solving
ilanashapiro Aug 11, 2025
9373fec
Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into pa…
ilanashapiro Aug 11, 2025
ef61315
add some comments and debug m_assumptions_used
ilanashapiro Aug 11, 2025
a32b7ba
redo greedy->frugal strategy so we don't split on existing cubes in f…
ilanashapiro Aug 12, 2025
d864226
set up initial scaffolding for sharing clauses between threads and ba…
ilanashapiro Aug 12, 2025
28 changes: 0 additions & 28 deletions src/nlsat/nlsat_explain.cpp
@@ -618,34 +618,6 @@ namespace nlsat {
}
}

// The monomials have to be square free according to
//"An improved projection operation for cylindrical algebraic decomposition of three-dimensional space", by McCallum, Scott

bool is_square_free_at_sample(polynomial_ref_vector &ps, var x) {
polynomial_ref p(m_pm);
polynomial_ref lc_poly(m_pm);
polynomial_ref disc_poly(m_pm);

for (unsigned i = 0; i < ps.size(); i++) {
p = ps.get(i);
unsigned k_deg = m_pm.degree(p, x);
if (k_deg == 0)
continue;
// p depends on x
disc_poly = discriminant(p, x); // Use global helper
if (sign(disc_poly) == 0) { // Discriminant is zero
TRACE(nlsat_explain, tout << "p is not square free:\n ";
display(tout, p); tout << "\ndiscriminant: "; display(tout, disc_poly) << "\n";
m_solver.display_assignment(tout) << '\n';
m_solver.display_var(tout << "x:", x) << '\n';
);

return false;
}
}
return true;
}

// For each p in ps add the leading coefficent to the projection,
void add_lc(polynomial_ref_vector &ps, var x) {
polynomial_ref p(m_pm);
202 changes: 105 additions & 97 deletions src/smt/smt_parallel.cpp
@@ -43,19 +43,25 @@ namespace smt {
void parallel::worker::run() {
ast_translation g2l(p.ctx.m, m); // global to local context -- MUST USE p.ctx.m, not ctx->m, AS GLOBAL MANAGER!!!
ast_translation l2g(m, p.ctx.m); // local to global context
while (m.inc()) {
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " checking cubes\n");
while (m.inc()) { // m.inc() bumps the resource counter and returns false once cancelled; m.limit().is_canceled() is the read-only check, so calling .limit() directly is unnecessary (m.inc() etc. provide a convenience wrapper)
vector<expr_ref_vector> cubes;
b.get_cubes(g2l, cubes);
if (cubes.empty())
return;
for (auto& cube : cubes) {
if (!m.inc()) {
b.set_exception("context cancelled");
return; // stop if the main context (i.e. parent thread) is cancelled
return;
}
switch (check_cube(cube)) {
IF_VERBOSE(0, verbose_stream() << "Processing cube: " << mk_bounded_pp(mk_and(cube), m, 3) << "\n");
lbool r = check_cube(cube);
if (m.limit().is_canceled()) {
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " context cancelled\n");
return;
}
switch (r) {
case l_undef: {
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " found undef cube\n");
// return unprocessed cubes to the batch manager
// add a split literal to the batch manager.
// optionally process other cubes and delay sending back unprocessed cubes to batch manager.
@@ -66,7 +72,7 @@ namespace smt {
break;
}
case l_true: {
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " found sat cube: " << mk_and(cube) << "\n");
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " found sat cube\n");
model_ref mdl;
ctx->get_model(mdl);
b.set_sat(l2g, *mdl);
@@ -85,14 +91,15 @@ namespace smt {
b.set_unsat(l2g, unsat_core);
return;
}
for (expr * e : unsat_core)
for (expr* e : unsat_core)
if (asms.contains(e))
b.report_assumption_used(l2g, e); // report assumptions used in unsat core, so they can be used in final core

// TODO: can share lemmas here, such as new units and not(and(unsat_core)), binary clauses, etc.
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " found unsat cube: " << mk_pp(mk_and(cube), m) << "\n");
b.share_lemma(l2g, mk_not(mk_and(unsat_core)));
// share_units();
// TODO: remember assumptions used in core so that they get used for the final core.
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " found unsat cube\n");
b.collect_clause(l2g, id, mk_not(mk_and(unsat_core)));
share_units(l2g);
break;
}
}
@@ -111,62 +118,57 @@ namespace smt {
ctx->set_random_seed(id + m_smt_params.m_random_seed);
}

void parallel::worker::share_units() {
// obj_hashtable<expr> unit_set;
// expr_ref_vector unit_trail(ctx.m);
// unsigned_vector unit_lim;
// for (unsigned i = 0; i < num_threads; ++i) unit_lim.push_back(0);

// // we just want to share lemmas and have a way of remembering how they are shared -- this is the next step
// // (this needs to be reworked)
// std::function<void(void)> collect_units = [&,this]() {
// //return; -- has overhead
// for (unsigned i = 0; i < num_threads; ++i) {
// context& pctx = *pctxs[i];
// pctx.pop_to_base_lvl();
// ast_translation tr(pctx.m, ctx.m);
// unsigned sz = pctx.assigned_literals().size();
// for (unsigned j = unit_lim[i]; j < sz; ++j) {
// literal lit = pctx.assigned_literals()[j];
// //IF_VERBOSE(0, verbose_stream() << "(smt.thread " << i << " :unit " << lit << " " << pctx.is_relevant(lit.var()) << ")\n";);
// if (!pctx.is_relevant(lit.var()))
// continue;
// expr_ref e(pctx.bool_var2expr(lit.var()), pctx.m);
// if (lit.sign()) e = pctx.m.mk_not(e);
// expr_ref ce(tr(e.get()), ctx.m);
// if (!unit_set.contains(ce)) {
// unit_set.insert(ce);
// unit_trail.push_back(ce);
// }
// }
// }

// unsigned sz = unit_trail.size();
// for (unsigned i = 0; i < num_threads; ++i) {
// context& pctx = *pctxs[i];
// ast_translation tr(ctx.m, pctx.m);
// for (unsigned j = unit_lim[i]; j < sz; ++j) {
// expr_ref src(ctx.m), dst(pctx.m);
// dst = tr(unit_trail.get(j));
// pctx.assert_expr(dst); // Assert that the conjunction of the assumptions in this unsat core is not satisfiable — prune it from future search
// }
// unit_lim[i] = pctx.assigned_literals().size();
// }
// IF_VERBOSE(1, verbose_stream() << "(smt.thread :units " << sz << ")\n");
// };
void parallel::worker::share_units(ast_translation& l2g) {
// Collect new units learned locally by this worker and send to batch manager
unsigned sz = ctx->assigned_literals().size();
for (unsigned j = shared_clause_limit; j < sz; ++j) { // iterate only over new literals since last sync -- QUESTION: I THINK THIS IS BUGGY BECAUSE THE SHARED CLAUSE LIMIT IS ONLY UPDATED (FOR ALL CLAUSE TYPES) WHEN WE GATHER NEW SHARED UNITS
literal lit = ctx->assigned_literals()[j];
expr_ref e(ctx->bool_var2expr(lit.var()), ctx->m); // turn literal into a Boolean expression
if (lit.sign())
e = ctx->m.mk_not(e); // negate if literal is negative
b.collect_clause(l2g, id, e);
}
}

void parallel::batch_manager::share_lemma(ast_translation& l2g, expr* lemma) {
void parallel::batch_manager::collect_clause(ast_translation& l2g, unsigned source_worker_id, expr* clause) {
std::scoped_lock lock(mux);
expr_ref g_lemma(l2g(lemma), l2g.to());
p.ctx.assert_expr(g_lemma); // QUESTION: where does this get shared with the local thread contexts? -- doesn't right now, we will build the scaffolding for this later!
expr* g_clause = l2g(clause);
if (!shared_clause_set.contains(g_clause)) {
shared_clause_set.insert(g_clause);
SharedClause sc{source_worker_id, g_clause};
shared_clause_trail.push_back(sc);
}
}

// QUESTION -- WHERE SHOULD WE CALL THIS?
void parallel::worker::collect_shared_clauses(ast_translation& g2l) {
expr_ref_vector new_clauses = b.return_shared_clauses(g2l, shared_clause_limit, id); // get new clauses from the batch manager
// iterate over new clauses and assert them in the local context
for (expr* e : new_clauses) {
expr_ref local_clause(e, g2l.to()); // e was already translated to the local context in the batch manager!!
ctx->assert_expr(local_clause); // assert the clause in the local context
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " asserting shared clause: " << mk_bounded_pp(local_clause, m, 3) << "\n");
}
}

// get new clauses from the batch manager and assert them in the local context
expr_ref_vector parallel::batch_manager::return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id) {
expr_ref_vector result(g2l.to());
{
std::scoped_lock lock(mux);
for (unsigned i = worker_limit; i < shared_clause_trail.size(); ++i) {
if (shared_clause_trail[i].source_worker_id == worker_id)
continue; // skip clauses from the requesting worker
expr_ref local_clause(g2l(shared_clause_trail[i].clause), g2l.to());
result.push_back(local_clause);
}
worker_limit = shared_clause_trail.size(); // update the worker limit to the end of the current trail
}
return result;
}

// PUT THE LOGIC FOR LEARNING FROM UNSAT CORE HERE IF THE CUBE INTERSECTS WITH IT!!!
// THERE IS AN EDGE CASE: IF ALL THE CUBES ARE UNSAT, BUT DEPEND ON NONEMPTY ASSUMPTIONS, NEED TO TAKE THE UNION OF THESE ASMS WHEN LEARNING FROM UNSAT CORE
// DON'T CODE THIS CASE YET: WE ARE JUST TESTING WITH EMPTY ASMS FOR NOW (I.E. WE ARE NOT PASSING IN ASMS). THIS DOES NOT APPLY TO THE INTERNAL "LEARNED" UNSAT CORE
lbool parallel::worker::check_cube(expr_ref_vector const& cube) {
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " checking cube\n";);
for (auto& atom : cube)
asms.push_back(atom);
lbool r = l_undef;
@@ -183,6 +185,7 @@ namespace smt {
b.set_exception("unknown exception");
}
asms.shrink(asms.size() - cube.size());
IF_VERBOSE(0, verbose_stream() << "Worker " << id << " DONE checking cube\n";);
return r;
}

@@ -219,6 +222,10 @@ namespace smt {
if (m_state != state::is_running)
return;
m_state = state::is_unsat;

// every time we do a check_sat call, don't want to have old info coming from a prev check_sat call
// the unsat core gets reset internally in the context after each check_sat, so we assert this property here
// takeaway: each call to check_sat needs to have a fresh unsat core
SASSERT(p.ctx.m_unsat_core.empty());
for (expr* e : unsat_core)
p.ctx.m_unsat_core.push_back(l2g(e));
@@ -245,7 +252,7 @@ namespace smt {

void parallel::batch_manager::report_assumption_used(ast_translation& l2g, expr* assumption) {
std::scoped_lock lock(mux);
m_used_assumptions.insert(l2g(assumption))
p.m_assumptions_used.insert(l2g(assumption));
}

lbool parallel::batch_manager::get_result() const {
@@ -255,10 +262,10 @@ namespace smt {
case state::is_running: // batch manager is still running, but all threads have processed their cubes, which means all cubes were unsat
if (!m_cubes.empty())
throw default_exception("inconsistent end state");
if (!m_assumptions_used.empty()) {
// collect unsat core from assumptions used, if any.
if (!p.m_assumptions_used.empty()) {
// collect unsat core from assumptions used, if any --> case when all cubes were unsat, but depend on nonempty asms, so we need to add these asms to final unsat core
SASSERT(p.ctx.m_unsat_core.empty());
for (auto a : m_assumptions_used)
for (auto a : p.m_assumptions_used)
p.ctx.m_unsat_core.push_back(a);
}
return l_false;
@@ -346,59 +353,59 @@ namespace smt {
std::scoped_lock lock(mux);
unsigned max_cubes = 1000;
bool greedy_mode = (m_cubes.size() <= max_cubes);
unsigned initial_m_cubes_size = m_cubes.size(); // cubes present before processing this batch
unsigned a_worker_start_idx = 0;

// --- Phase 1: Add worker cubes from C_worker and split each new cube on the existing atoms in A_batch (m_split_atoms) that aren't already in the new cube ---
//
// --- Phase 1: Greedy split of *existing* cubes on new A_worker atoms (greedy) ---
//
if (greedy_mode) {
for (; a_worker_start_idx < A_worker.size(); ++a_worker_start_idx) {
expr_ref g_atom(l2g(A_worker[a_worker_start_idx]), l2g.to());
if (m_split_atoms.contains(g_atom))
continue;
m_split_atoms.push_back(g_atom);

add_split_atom(g_atom, 0); // split all *existing* cubes
if (m_cubes.size() > max_cubes) {
greedy_mode = false;
++a_worker_start_idx; // start frugal from here
break;
}
}
}

unsigned initial_m_cubes_size = m_cubes.size(); // where to start processing the worker cubes after splitting the EXISTING cubes on the new worker atoms

// --- Phase 2: Process worker cubes (greedy) ---
for (auto& c : C_worker) {
expr_ref_vector g_cube(l2g.to());
for (auto& atom : c)
g_cube.push_back(l2g(atom));

unsigned start = m_cubes.size();
m_cubes.push_back(g_cube); // continuously update the start idx so we're just processing the single most recent cube
unsigned start = m_cubes.size(); // update start after adding each cube so we only process the current cube being added
m_cubes.push_back(g_cube);

if (greedy_mode) {
// Split new cube all existing m_split_atoms (i.e. A_batch) that aren't already in the cube
// Split new cube on all existing m_split_atoms not in it
for (auto g_atom : m_split_atoms) {
if (!atom_in_cube(g_cube, g_atom)) {
add_split_atom(g_atom, start);
if (m_cubes.size() > max_cubes) {
greedy_mode = false;
break; // stop splitting on older atoms, switch to frugal mode
break;
}
}
}
}
}

unsigned a_worker_start_idx = 0;

// --- Phase 2: Process split atoms from A_worker ---
if (greedy_mode) {
// Start as greedy: split all cubes on new atoms
for (; a_worker_start_idx < A_worker.size(); ++a_worker_start_idx) {
expr_ref g_atom(l2g(A_worker[a_worker_start_idx]), l2g.to());
if (m_split_atoms.contains(g_atom))
continue;
m_split_atoms.push_back(g_atom);

add_split_atom(g_atom, 0);
if (m_cubes.size() > max_cubes) {
greedy_mode = false;
++a_worker_start_idx; // Record where to start processing the remaining atoms for frugal processing, so there's no redundant splitting
break;
}
}
}

// --- Phase 3: Frugal fallback ---
// --- Phase 3: Frugal fallback: only process NEW worker cubes with NEW atoms ---
if (!greedy_mode) {
// Split only cubes added in *this call* on the new A_worker atoms (starting where we left off from the initial greedy phase)
for (unsigned i = a_worker_start_idx; i < A_worker.size(); ++i) {
expr_ref g_atom(l2g(A_worker[i]), l2g.to());
if (!m_split_atoms.contains(g_atom))
m_split_atoms.push_back(g_atom);
add_split_atom(g_atom, initial_m_cubes_size); // start from the initial size of m_cubes
add_split_atom(g_atom, initial_m_cubes_size);
}
}
}
@@ -439,13 +446,13 @@ namespace smt {

if (m.has_trace_stream())
throw default_exception("trace streams have to be off in parallel mode");

struct scoped_clear_table {
obj_hashtable& ht;
scoped_clear(obj_hashtable& ht) : ht(ht) {}
~scoped_clear() { ht.reset(); }
obj_hashtable<expr>& ht;
scoped_clear_table(obj_hashtable<expr>& ht) : ht(ht) {} // Constructor: Takes a reference to a hash table when the object is created and saves it.
~scoped_clear_table() { ht.reset(); } // Destructor: When the scoped_clear_table object goes out of scope, it automatically calls reset() on that hash table, clearing it
};
scoped_clear_table clear(m_batch_manager.m_used_assumptions);
scoped_clear_table clear(m_assumptions_used); // creates a scoped_clear_table named clear, bound to m_assumptions_used

{
m_batch_manager.initialize();
@@ -460,6 +467,7 @@ namespace smt {
// within the lexical scope of the code block, creates a data structure that allows you to push children
// objects to the limit object, so if someone cancels the parent object, the cancellation propagates to the children
// and that cancellation has the lifetime of the scope
// even if this code doesn't explicitly kill the main thread, this still applies: if you e.g. Ctrl+C the main thread, the child threads need to be cancelled
for (auto w : m_workers)
sl.push_child(&(w->limit()));

@@ -475,7 +483,7 @@ namespace smt {
for (auto& th : threads)
th.join();

for (auto w : m_workers)
for (auto w : m_workers)
w->collect_statistics(ctx.m_aux_stats);
}
