Conversation
for more information, see https://pre-commit.ci
shangtai left a comment
Thanks for this. I updated poetry.lock.
We will have to fix some bugs first :)
************* Module qiboopt.integrations.qiboml_adapter
src/qiboopt/integrations/qiboml_adapter.py:19:4: E0401: Unable to import 'qiboml.operations.differentiation' (import-error)
************* Module qiboopt.opt_class.opt_class
src/qiboopt/opt_class/opt_class.py:838:23: E0606: Possibly using variable 'params' before assignment (possibly-used-before-assignment)
src/qiboopt/opt_class/opt_class.py:857:16: E0606: Possibly using variable 'best' before assignment (possibly-used-before-assignment)
src/qiboopt/opt_class/opt_class.py:859:16: E0606: Possibly using variable 'extra' before assignment (possibly-used-before-assignment)
Error: Process completed with exit code 2.
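The three `possibly-used-before-assignment` (E0606) warnings usually come from variables that are only bound inside a conditional or loop branch. A minimal sketch of the standard fix, initializing before the branch; the names `params`, `best`, and `extra` mirror the pylint output, but the surrounding logic here is hypothetical, not the actual `opt_class.py` code:

```python
def pick_best(candidates):
    """Select the highest-scoring candidate from a list of dicts."""
    # Bind all three names before the loop so every later use is
    # well-defined even when `candidates` is empty; this is what
    # silences pylint's E0606 (possibly-used-before-assignment).
    best = None
    params = []
    extra = {}
    for cand in candidates:
        if best is None or cand["score"] > best["score"]:
            best = cand
            params = cand.get("params", [])
            extra = {"score": cand["score"]}
    if best is None:
        # Make the empty-input case explicit instead of letting an
        # unbound variable surface later as a confusing error.
        raise ValueError("no candidates provided")
    return best, params, extra
```

The alternative, guarding every later use with `if best is not None:` checks, also satisfies pylint but tends to scatter the invariant across the function; a single up-front initialization keeps it in one place.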
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##              main      #80   +/-  ##
=========================================
  Coverage   100.00%  100.00%
=========================================
  Files            3        4     +1
  Lines          509      603    +94
=========================================
+ Hits           509      603    +94

Flags with carried forward coverage won't be shown.
Summary
This PR integrates `qiboml` as an optional training engine for `QUBO.train_QAOA`, while keeping the existing qibo optimizer path as the default (`engine="legacy"`). The goal is to enable gradient-based QAOA optimization through qiboml/torch with minimal API disruption and optional dependency installation.
What Changed
- New module `src/qiboopt/integrations/qiboml_adapter.py` with `optimize_qaoa_with_qiboml(...)` using qiboml's PyTorch `QuantumModel`
  - Differentiation options: `psr`, `jax`, `adjoint`, default torch path
  - Optimizers: `adam`, `sgd`
- Extended `QUBO.train_QAOA` to support multi-engine training via new parameters `engine`, `optimizer`, `lr`, `epochs`, `differentiation`
  - `engine="legacy"` remains the default and preserves previous behavior
  - `engine="qiboml"` uses the qiboml adapter for regular-loss optimization
  - Non-regular losses (`regular_loss=False`) continue through the legacy path
- `nshots=None` or `nshots=0` now runs exact mode
  - Sampling (`nshots > 0`): bitstring counts
  - Exact (`nshots is None or 0`): bitstring probabilities
- `pyproject.toml` updated with optional `qiboml` and `torch` dependencies

Backward Compatibility
- Existing calls keep working unchanged (default `engine="legacy"`).
- The qiboml/torch dependencies are only required when `engine="qiboml"` is selected.
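The engine selection and lazy optional-dependency handling described above can be sketched as follows. This is an illustrative mock of the multi-engine dispatch pattern, not the actual `train_QAOA` implementation; the function name, return values, and error messages are stand-ins:

```python
def train_qaoa(engine="legacy", optimizer="adam", lr=0.01, epochs=100, nshots=None):
    """Illustrative dispatcher: route training to the default legacy qibo
    path or the optional qiboml path, mirroring the PR's `engine` parameter."""
    if engine == "legacy":
        # nshots=None or nshots=0 selects exact mode (probabilities);
        # nshots > 0 selects sampling mode (bitstring counts).
        mode = "sampling" if nshots else "exact"
        return {"engine": "legacy", "mode": mode}
    if engine == "qiboml":
        # The qiboml/torch stack is only needed on this branch, so import
        # lazily and fail with a clear message if the extras are missing.
        try:
            import qiboml  # noqa: F401  (optional dependency)
        except ImportError as exc:
            raise ImportError(
                "engine='qiboml' requires the optional qiboml/torch extras"
            ) from exc
        return {"engine": "qiboml", "optimizer": optimizer,
                "lr": lr, "epochs": epochs}
    raise ValueError(f"unknown engine: {engine!r}")
```

Keeping the import inside the `engine="qiboml"` branch is what makes the dependency genuinely optional: users of the default path never trigger it, which matches the backward-compatibility guarantee stated above.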