Merged
1 change: 1 addition & 0 deletions doc/changelog.d/3978.fixed.md
@@ -0,0 +1 @@
fix: eqslv not being properly converted.
257 changes: 131 additions & 126 deletions src/ansys/mapdl/core/_commands/solution/analysis_options.py
@@ -1627,132 +1627,137 @@ def eqslv(self, lab="", toler="", mult="", keepfile="", **kwargs):
lab
Equation solver type:

SPARSE - Sparse direct equation solver. Applicable to
real-value or complex-value symmetric and
unsymmetric matrices. Available only for STATIC,
HARMIC (full method only), TRANS (full method
only), SUBSTR, and PSD spectrum analysis types
[ANTYPE]. Can be used for nonlinear and linear
analyses, especially nonlinear analysis where
indefinite matrices are frequently
encountered. Well suited for contact analysis
where contact status alters the mesh
topology. Other typical well-suited applications
are: (a) models consisting of shell/beam or
shell/beam and solid elements (b) models with a
multi-branch structure, such as an automobile
exhaust or a turbine fan. This is an alternative
to iterative solvers since it combines both speed
and robustness. Generally, it requires
considerably more memory (~10x) than the PCG
solver to obtain optimal performance (running
totally in-core). When memory is limited, the
solver works partly in-core and out-of-core,
which can noticeably slow down the performance of
the solver. See the BCSOPTION command for more
details on the various modes of operation for
this solver.

This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, this solver preserves all of
the merits of the classic or shared memory sparse
solver. The total sum of memory (summed for all processes)
is usually higher than the shared memory sparse
solver. System configuration also affects the performance
of the distributed memory parallel solver. If enough
physical memory is available, running this solver in the
in-core memory mode achieves optimal performance. The
ideal configuration when using the out-of-core memory mode
is to use one processor per machine on multiple machines
(a cluster), spreading the I/O across the hard drives of
each machine, assuming that you are using a high-speed
network such as Infiniband to efficiently support all
communication across the multiple machines. - This solver
supports use of the GPU accelerator capability.

JCG - Jacobi Conjugate Gradient iterative equation
solver. Available only for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural, thermal,
and multiphysics applications. Applicable for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. Recommended for 3-D harmonic
analyses in structural and multiphysics
applications. Efficient for heat transfer,
electromagnetics, piezoelectrics, and acoustic field
problems.

This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, in addition to the limitations
listed above, this solver only runs in a distributed
parallel fashion for STATIC and TRANS (full method)
analyses in which the stiffness is symmetric and only when
not using the fast thermal option (THOPT). Otherwise, this
solver runs in shared memory parallel mode inside
Distributed ANSYS. - This solver supports use of the GPU
accelerator capability. When using the GPU accelerator
capability, in addition to the limitations listed above,
this solver is available only for STATIC and TRANS (full
method) analyses where the stiffness is symmetric and does
not support the fast thermal option (THOPT).

ICCG - Incomplete Cholesky Conjugate Gradient iterative
equation solver. Available for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural,
thermal, and multiphysics applications, and for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. The ICCG solver requires more
memory than the JCG solver, but is more robust than
the JCG solver for ill-conditioned matrices.

This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.

QMR - Quasi-Minimal Residual iterative equation
solver. Available for the HARMIC (full method only)
analysis type [ANTYPE]. Can be used for
high-frequency electromagnetic applications, and for
symmetric, complex, definite, and indefinite
matrices. The QMR solver is more stable than the
ICCG solver.

This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.

PCG - Preconditioned Conjugate Gradient iterative equation
solver (licensed from Computational Applications and
Systems Integration, Inc.). Requires less disk file
space than SPARSE and is faster for large
models. Useful for plates, shells, 3-D models, large
2-D models, and other problems having symmetric,
sparse, definite or indefinite matrices for
nonlinear analysis. Requires twice as much memory
as JCG. Available only for analysis types [ANTYPE]
STATIC, TRANS (full method only), or MODAL (with PCG
Lanczos option only). Also available for the use
pass of substructure analyses (MATRIX50). The PCG
solver can robustly solve equations with constraint
equations (CE, CEINTF, CPINTF, and CERIG). With
this solver, you can use the MSAVE command to obtain
a considerable memory savings.

The PCG solver can handle ill-conditioned problems by
using a higher level of difficulty (see
PCGOPT). Ill-conditioning arises from elements with high
aspect ratios, contact, and plasticity. - This solver can
be run in shared memory parallel or distributed memory
parallel (Distributed ANSYS) mode. When used in
Distributed ANSYS, this solver preserves all of the merits
of the classic or shared memory PCG solver. The total sum
of memory (summed for all processes) is about 30% more
than the shared memory PCG solver.
SPARSE
Sparse direct equation solver. Applicable to
real-value or complex-value symmetric and
unsymmetric matrices. Available only for STATIC,
HARMIC (full method only), TRANS (full method
only), SUBSTR, and PSD spectrum analysis types
[ANTYPE]. Can be used for nonlinear and linear
analyses, especially nonlinear analysis where
indefinite matrices are frequently
encountered. Well suited for contact analysis
where contact status alters the mesh
topology. Other typical well-suited applications
are: (a) models consisting of shell/beam or
shell/beam and solid elements (b) models with a
multi-branch structure, such as an automobile
exhaust or a turbine fan. This is an alternative
to iterative solvers since it combines both speed
and robustness. Generally, it requires
considerably more memory (~10x) than the PCG
solver to obtain optimal performance (running
totally in-core). When memory is limited, the
solver works partly in-core and out-of-core,
which can noticeably slow down the performance of
the solver. See the BCSOPTION command for more
details on the various modes of operation for
this solver.

This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, this solver preserves all of
the merits of the classic or shared memory sparse
solver. The total sum of memory (summed for all processes)
is usually higher than the shared memory sparse
solver. System configuration also affects the performance
of the distributed memory parallel solver. If enough
physical memory is available, running this solver in the
in-core memory mode achieves optimal performance. The
ideal configuration when using the out-of-core memory mode
is to use one processor per machine on multiple machines
(a cluster), spreading the I/O across the hard drives of
each machine, assuming that you are using a high-speed
network such as Infiniband to efficiently support all
communication across the multiple machines. - This solver
supports use of the GPU accelerator capability.

JCG
Jacobi Conjugate Gradient iterative equation
solver. Available only for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural, thermal,
and multiphysics applications. Applicable for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. Recommended for 3-D harmonic
analyses in structural and multiphysics
applications. Efficient for heat transfer,
electromagnetics, piezoelectrics, and acoustic field
problems.

This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, in addition to the limitations
listed above, this solver only runs in a distributed
parallel fashion for STATIC and TRANS (full method)
analyses in which the stiffness is symmetric and only when
not using the fast thermal option (THOPT). Otherwise, this
solver runs in shared memory parallel mode inside
Distributed ANSYS. - This solver supports use of the GPU
accelerator capability. When using the GPU accelerator
capability, in addition to the limitations listed above,
this solver is available only for STATIC and TRANS (full
method) analyses where the stiffness is symmetric and does
not support the fast thermal option (THOPT).

ICCG
Incomplete Cholesky Conjugate Gradient iterative
equation solver. Available for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural,
thermal, and multiphysics applications, and for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. The ICCG solver requires more
memory than the JCG solver, but is more robust than
the JCG solver for ill-conditioned matrices.

This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.

QMR
Quasi-Minimal Residual iterative equation
solver. Available for the HARMIC (full method only)
analysis type [ANTYPE]. Can be used for
high-frequency electromagnetic applications, and for
symmetric, complex, definite, and indefinite
matrices. The QMR solver is more stable than the
ICCG solver.

This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.

PCG
Preconditioned Conjugate Gradient iterative equation
solver (licensed from Computational Applications and
Systems Integration, Inc.). Requires less disk file
space than SPARSE and is faster for large
models. Useful for plates, shells, 3-D models, large
2-D models, and other problems having symmetric,
sparse, definite or indefinite matrices for
nonlinear analysis. Requires twice as much memory
as JCG. Available only for analysis types [ANTYPE]
STATIC, TRANS (full method only), or MODAL (with PCG
Lanczos option only). Also available for the use
pass of substructure analyses (MATRIX50). The PCG
solver can robustly solve equations with constraint
equations (CE, CEINTF, CPINTF, and CERIG). With
this solver, you can use the MSAVE command to obtain
a considerable memory savings.

The PCG solver can handle ill-conditioned problems by
using a higher level of difficulty (see
PCGOPT). Ill-conditioning arises from elements with high
aspect ratios, contact, and plasticity. - This solver can
be run in shared memory parallel or distributed memory
parallel (Distributed ANSYS) mode. When used in
Distributed ANSYS, this solver preserves all of the merits
of the classic or shared memory PCG solver. The total sum
of memory (summed for all processes) is about 30% more
than the shared memory PCG solver.

toler
Iterative solver tolerance value. Used only with the
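The solver labels documented in the docstring above can be illustrated with a small, self-contained validation sketch. This is a hypothetical helper based only on the labels and the iterative-solver tolerance described in the docstring; `VALID_SOLVERS` and `check_eqslv_args` are illustrative names, not part of the PyMAPDL API.

```python
# Hypothetical sketch of the EQSLV argument rules described above.
# VALID_SOLVERS and check_eqslv_args are illustrative names only,
# not part of the PyMAPDL API.
VALID_SOLVERS = {"SPARSE", "JCG", "ICCG", "QMR", "PCG"}

def check_eqslv_args(lab, toler=None):
    """Validate an EQSLV solver label and optional iterative tolerance."""
    lab = str(lab).upper()
    if lab not in VALID_SOLVERS:
        raise ValueError(f"Unknown solver label: {lab!r}")
    if toler is not None and lab == "SPARSE":
        # Per the docstring, TOLER applies to iterative solvers,
        # not the sparse direct solver.
        raise ValueError("TOLER is only meaningful for iterative solvers")
    return lab

print(check_eqslv_args("pcg", 1e-8))  # -> PCG
```

In a live PyMAPDL session the equivalent call would be `mapdl.eqslv("PCG", 1e-8)`, which this PR's docstring documents.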
3 changes: 2 additions & 1 deletion src/ansys/mapdl/core/convert.py
@@ -68,14 +68,15 @@
"ASBL": (), # ASBL,
"ATAN": (), # ATAN,
"BCSO": (), # BCSOPTION,
"CORI": (), # CORIOLIS
"CDRE": (), # CDREAD
"CLOG": (), # CLOG,
"CONJ": (), # CONJUG,
"CORI": (), # CORIOLIS
"DERI": (), # DERIV,
"DSPO": (), # DSPOPTION,
"ENER": (), # ENERSOL,
"ENSY": (), # ENSYM,
"EQSL": (), # EQSLV
"ESYM": (), # ESYM,
"EXP": (), # EXP,
"EXPA": (), # EXPAND,
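The mapping above keys on four-character names because classic APDL identifies commands by their first four characters, so EQSLV is registered as "EQSL". The following sketch shows that truncated lookup; it is a simplified illustration under that assumption, and `NON_CONVERTED` and `is_converted_to_method` are hypothetical names, not PyMAPDL's actual converter internals.

```python
# Hypothetical sketch of the truncated-name lookup behind the table above.
# Classic APDL commands are significant to their first four characters,
# so EQSLV must be matched via "EQSL". Names here are illustrative only.
NON_CONVERTED = {"BCSO", "CDRE", "CORI", "EQSL"}  # subset of the table above

def is_converted_to_method(command_line):
    """Return False when the command should stay as a raw mapdl.run() call."""
    name = command_line.split(",")[0].strip().upper()
    return name[:4] not in NON_CONVERTED

print(is_converted_to_method("EQSLV,SPARSE"))  # -> False
```

With this PR, the converter treats `EQSLV,...` lines this way instead of mis-converting them to a method call.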