Releases · IBM/simulai
0.99.28
February 12, 2024
- Basic Neural Implicit Flow (NIF).
- Tokenizer for DeepONets.
- Fixed some bugs related to the Wavelet activation when running on GPUs.
0.99.27
This is a modest upgrade. The modifications are basically:
- A class Tokenizer (simulai.io.Tokenizer) can be used for creating tokenized datasets to be used in combination with transformer networks. This class is also easily extensible (a usage sketch follows this list).
- An API based on torchview, called view_api (simulai.utilities.view_api), can be used to facilitate the visualization of a neural network model during its deployment stage. See the tutorial for more information.
- Minor glitches fixed.
- New documentation based on MkDocs.
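The sketch below illustrates the Tokenizer mentioned in the first item. It is a minimal, hedged example: the constructor argument `kind`, the method `generate_input_tokens`, and its parameters are assumptions for illustration, not the verified interface; check the SimulAI documentation for the actual API.

```python
# Hypothetical sketch: building tokenized data for a transformer network.
# `kind="time_indexer"` and `generate_input_tokens(...)` are assumed names.
import numpy as np
from simulai.io import Tokenizer

series = np.random.rand(1_000, 3)            # toy time series: 1000 steps, 3 variables

tokenizer = Tokenizer(kind="time_indexer")   # assumed configuration option
tokens = tokenizer.generate_input_tokens(    # assumed method and argument names
    series, num_time_steps=16
)
print(tokens.shape)
```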
0.99.26
- SPFile can save model kwargs arguments in pickle files.
- DeepONets can now estimate a bias for the last layer.
- DeepONets can have decoders in the output.
- All the models (defined under simulai.models) can explicitly disable the reference to the device being used, which is a basic step towards allowing their usage together with PyTorch Lightning (see the sketch after this list).
- Transformers available in simulai.models (simulai/models/_pytorch_models/_transformer.py).
- U-Nets available in simulai.models (simulai/models/_pytorch_models/_unet.py).
- Bug fixes.
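To illustrate why disabling the internal device reference matters, the sketch below wraps a SimulAI network in a PyTorch Lightning module so that Lightning controls device placement. How the reference is disabled (constructor flag vs. method) is not shown and remains an assumption here; only the Lightning API below is standard.

```python
# Hedged sketch: wrapping a simulai model (with its device reference disabled)
# in a LightningModule, letting PyTorch Lightning manage devices and training.
import torch
import pytorch_lightning as pl

class LitRegression(pl.LightningModule):
    def __init__(self, net: torch.nn.Module):
        super().__init__()
        self.net = net                       # e.g. a network from simulai.models
        self.loss_fn = torch.nn.MSELoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        return self.loss_fn(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```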
0.99.25
0.99.24
- Multi-scale Variational Autoencoder (simulai.models.MultiScaleAutoencoder).
- CNN-DeepONet (simulai.workflows.ConvDeepONet).
- Basic multi-fidelity network architecture (simulai.models.MultiNetwork).
0.99.23
- Weight adjusters for balancing the loss function terms are now supported with the state-of-the-art approaches Learning Rate Annealing (AnnealingWeights) and Inverse Dirichlet (InverseDirichletWeights).
- The relative loss option (enabled via the relative=True argument passed to the loss function dictionary) now also handles null or very small values during the optimization stage.
- A new option for using logarithmic data-driven losses is now enabled via the argument use_data_log=True (a configuration sketch follows this list).
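A hedged sketch combining these options is shown below. The keys relative and use_data_log and the class AnnealingWeights are the names cited above; the weights_estimator key, the AnnealingWeights constructor arguments, and the exact Optimizer/fit signatures are assumptions, and net, residual, and the data arrays are presumed to come from earlier setup.

```python
# Hedged sketch: configuring the new loss options in a physics-informed run.
# Names not cited in the release notes are illustrative assumptions.
from simulai.optimization import Optimizer, AnnealingWeights  # assumed import for AnnealingWeights

params = {
    "residual": residual,                    # SymbolicOperator for the PDE residual
    "relative": True,                        # relative loss, safe for very small targets
    "use_data_log": True,                    # logarithmic data-driven loss terms
    "weights_estimator": AnnealingWeights(alpha=0.9),  # assumed key and constructor arg
}

optimizer = Optimizer("adam", params={"lr": 1e-3})
optimizer.fit(op=net, input_data=input_data, target_data=target_data,
              n_epochs=5_000, loss="pirmse", params=params)
```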
0.99.22
- Models can be restored and made non-trainable: net.detach_parameters().
- Loss function history is stored in dictionaries loss_states within instances of the class simulai.optimization.Optimizer for all cases (see the sketch after this list).
- Bug fixes.
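A short hedged sketch of both features: restoring a trained network, freezing it with detach_parameters, and reading the stored loss history. Apart from detach_parameters and loss_states, which are quoted above, the SPFile import path and read signature and the structure of loss_states are assumptions.

```python
# Hedged sketch: freezing a restored model and inspecting the loss history.
from simulai.file import SPFile              # assumed import path for SPFile

net = SPFile().read(model_path="/tmp/my_model")  # assumed reload signature
net.detach_parameters()                          # restored model becomes non-trainable

# After training elsewhere, the optimizer keeps its loss history, e.g.:
# history = optimizer.loss_states            # assumed shape: {"pde": [...], "data": [...]}
```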
0.99.21
- Global dtype detection for arrays and tensors used in neural network models:

```python
In [1]: import torch
In [2]: from simulai import ARRAY_DTYPE
In [3]: ARRAY_DTYPE
Out[3]: numpy.float32
```

```python
In [1]: import torch
In [2]: torch.set_default_dtype(torch.float64)
In [3]: from simulai import ARRAY_DTYPE
In [4]: ARRAY_DTYPE
Out[4]: numpy.float64
```

- Bug fixes.
0.99.20
- When using Physics-informed DeepONets, it is possible to use both the trunk and branch input variables as inputs for the symbolic expressions, as seen below:

```python
residual = SymbolicOperator(
    expressions=[f],
    input_vars=input_labels,
    output_vars=output_labels,
    function=manufactured_net,
    inputs_key="input_trunk|input_branch:0|input_branch:1",
    constants={"pi": np.pi},
    device="gpu",
    engine="torch",
)
```

The argument inputs_key defines that the input for the symbolic expressions corresponds to the concatenation of input_trunk with the first and second columns of input_branch.
- Communication with SciPy optimizers is now enabled, as can be seen in the code snippet below:

```python
from simulai.optimization import PIRMSELoss, ScipyInterface

loss_instance = PIRMSELoss(operator=net)

optimizer_lbfgs = ScipyInterface(
    fun=net, optimizer="L-BFGS-B", loss=loss_instance, loss_config=params
)

optimizer_lbfgs.fit(input_data=data)
```

See this example for more details.
- Batch normalization can be set for automatically generated autoencoders using the boolean argument use_batch_norm:

```python
autoencoder = AutoencoderVariational(
    input_dim=(None, 1, 64, 128),
    latent_dim=8,
    activation="tanh",
    architecture="cnn",
    case="2d",
    use_batch_norm=True,
)
```
0.99.19
- Bug fixes.
- Some arguments now have default values to avoid unnecessary configuration.