
Releases: IBM/simulai

0.99.28

13 Feb 12:43

February 12, 2024

0.99.27

14 Nov 14:20

This is a modest upgrade. The main changes are:

  • A class Tokenizer (simulai.io.Tokenizer) can be used to create tokenized
    datasets for use in combination with transformer networks. This class is
    also easily extensible (a usage sketch follows this list).
  • An API based on torchview, called view_api (in simulai.utilities.view_api),
    can be used to visualize a neural network model during its deployment
    stage. See the tutorial for more information.
  • Minor glitch fixes.
  • New documentation based on MkDocs.
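
As a rough illustration of the tokenization workflow, the sketch below prepares sliding-window tokens from a time series. The constructor argument kind, the method generate_input_tokens and its num_step parameter are assumptions about the Tokenizer interface, not confirmed signatures.

# Hypothetical sketch: 'kind', 'generate_input_tokens' and 'num_step' are
# assumed names, not the confirmed Tokenizer API.
import numpy as np

from simulai.io import Tokenizer

series = np.random.rand(1000, 3)  # toy multivariate time series

# 'time_indexer' is an assumed tokenization strategy
tokenizer = Tokenizer(kind="time_indexer")

# Sliding-window tokens ready to be fed to a transformer network
input_tokens = tokenizer.generate_input_tokens(series, num_step=10)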

0.99.26

26 Oct 16:55

  • SPFile can now save model kwargs in pickle files.
  • DeepONets can now estimate a bias for the last layer.
  • DeepONets can have decoder networks appended to their output (a sketch of
    both features follows this list).
  • All the models (defined under simulai.models) can explicitly disable the
    reference to the device in use, which is a basic step toward using them
    together with PyTorch Lightning.
  • Transformers available in simulai.models (simulai/models/_pytorch_models/_transformer.py).
  • U-Nets available in simulai.models (simulai/models/_pytorch_models/_unet.py).
  • Bug fixes.
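
A minimal sketch combining the two DeepONet features above. The DenseNetwork configuration follows the usual simulai pattern, while the argument names decoder_network and use_bias are assumptions for the features described in these notes.

# Sketch only: 'decoder_network' and 'use_bias' are assumed argument names
# for the new features, not confirmed DeepONet signatures.
from simulai.models import DeepONet
from simulai.regression import DenseNetwork

trunk = DenseNetwork(
    layers_units=[50, 50, 50],  # hidden layers
    activations="tanh",
    input_size=2,               # e.g. (x, t) coordinates
    output_size=100,            # latent dimension shared with the branch
    name="trunk_net",
)

branch = DenseNetwork(
    layers_units=[50, 50, 50],
    activations="tanh",
    input_size=10,              # e.g. sensor readings
    output_size=100,
    name="branch_net",
)

decoder = DenseNetwork(
    layers_units=[20],
    activations="tanh",
    input_size=1,
    output_size=1,
    name="decoder_net",
)

net = DeepONet(
    trunk_network=trunk,
    branch_network=branch,
    decoder_network=decoder,  # decoder appended to the output (name assumed)
    use_bias=True,            # bias for the last layer (name assumed)
    var_dim=1,
    model_id="deeponet",
)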

0.99.25

15 Sep 12:51
  • SplitPool: a framework providing a simple way to combine multiple networks
    in a divide-and-conquer fashion.
  • First experiments with classification tasks, via the introduction of the
    BCELoss wrapper (a sketch follows this list).
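
A hedged sketch of a binary-classification fit: the loss key "bce" for selecting the BCELoss wrapper and the last_activation argument are assumptions, while the rest follows the usual simulai Optimizer pattern.

# Sketch only: loss="bce" and 'last_activation' are assumptions; the BCELoss
# wrapper is assumed to expect probabilities in [0, 1].
import numpy as np

from simulai.optimization import Optimizer
from simulai.regression import DenseNetwork

classifier = DenseNetwork(
    layers_units=[50, 50],
    activations="tanh",
    last_activation="sigmoid",  # probabilities for the BCE loss (assumed)
    input_size=10,
    output_size=1,
    name="classifier",
)

X = np.random.rand(100, 10)
y = (np.random.rand(100, 1) > 0.5).astype(np.float32)  # binary labels

optimizer = Optimizer("adam", params={"lr": 1e-3})
optimizer.fit(op=classifier, input_data=X, target_data=y,
              n_epochs=200, loss="bce", params={})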

0.99.24

21 Jul 17:47
  • Multi-scale Variational Autoencoder (simulai.models.MultiScaleAutoencoder);
    a hedged instantiation sketch follows this list.
  • CNN-DeepONet (simulai.workflows.ConvDeepONet).
  • Basic multi-fidelity network architecture (simulai.models.MultiNetwork).
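
Purely as an illustration, the instantiation below assumes that MultiScaleAutoencoder mirrors the AutoencoderVariational constructor shown under 0.99.20; the actual signature may differ.

# Illustrative assumption: constructor mirroring AutoencoderVariational
# (see the 0.99.20 notes below); the real signature may differ.
from simulai.models import MultiScaleAutoencoder

autoencoder = MultiScaleAutoencoder(
    input_dim=(None, 1, 64, 128),
    latent_dim=8,
    activation="tanh",
    architecture="cnn",
    case="2d",
)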

0.99.23

16 Jun 14:18
  • Weight adjusters for balancing the loss-function terms are now supported,
    with the state-of-the-art approaches Learning Rate Annealing
    (AnnealingWeights) and Inverse Dirichlet (InverseDirichletWeights).
  • The relative loss option (enabled via the relative=True argument passed to
    the loss-function dictionary) now also handles null or very small values
    during the optimization stage.
  • A new option for using logarithmic data-driven losses is now enabled via
    the argument use_data_log=True. A sketch of these loss options follows
    this list.
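
The sketch below shows where these options live: relative and use_data_log are the keys named above, while net, X, y and the remaining Optimizer settings are illustrative assumptions following the usual simulai fitting pattern. How AnnealingWeights and InverseDirichletWeights plug into the optimizer is not shown here.

# 'relative' and 'use_data_log' are the keys named in these notes; the
# surrounding Optimizer usage is an assumed, typical simulai pattern.
from simulai.optimization import Optimizer

optimizer = Optimizer("adam", params={"lr": 1e-3})

loss_params = {
    "relative": True,      # relative loss, robust to null/very small values
    "use_data_log": True,  # logarithmic data-driven loss terms
}

# net, X and y are assumed to be an already-defined model and dataset
optimizer.fit(op=net, input_data=X, target_data=y,
              n_epochs=2000, loss="rmse", params=loss_params)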

0.99.22

26 May 17:12
  • Models can be restored and made non-trainable:
net.detach_parameters()
  • The loss function history is stored in dictionaries named loss_states
    within instances of the class simulai.optimization.Optimizer in all cases
    (a usage sketch follows this list).

  • Bug fixes.
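
After fitting, the stored history can be inspected directly. In the sketch below, optimizer is an already-fitted simulai.optimization.Optimizer, and the dictionary key "loss" is an assumed name, since the stored keys depend on the loss function used.

# Sketch only: the key "loss" is assumed; the actual keys in loss_states
# depend on the loss function that was used.
import matplotlib.pyplot as plt

history = optimizer.loss_states  # dict of per-term loss histories

plt.plot(history["loss"])  # assumed key name
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()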

0.99.21

12 May 17:21
  • Global dtype detection for arrays and tensors used in neural network models:
In [1]: import torch

In [2]: from simulai import ARRAY_DTYPE

In [3]: ARRAY_DTYPE
Out[3]: numpy.float32

After setting torch's default dtype to float64:

In [1]: import torch

In [2]: torch.set_default_dtype(torch.float64)

In [3]: from simulai import ARRAY_DTYPE

In [4]: ARRAY_DTYPE
Out[4]: numpy.float64
  • Bug fixes.

0.99.20

28 Apr 19:25
  • When using Physics-informed DeepONets, it is possible to use both trunk
    and branch input variables as inputs for the symbolic expressions, as
    seen below:
residual = SymbolicOperator(                                                  
    expressions=[f],
    input_vars=input_labels,
    output_vars=output_labels,
    function=manufactured_net,
    inputs_key="input_trunk|input_branch:0|input_branch:1",
    constants={"pi":np.pi},
    device="gpu",
    engine="torch",
)

The argument inputs_key specifies that the input to the symbolic expressions is the concatenation of input_trunk with the first and second columns of input_branch.

  • Communication with SciPy optimizers is now enabled, as can be seen in the code snippet below:
from simulai.optimization import PIRMSELoss, ScipyInterface

loss_instance = PIRMSELoss(operator=net)

optimizer_lbfgs = ScipyInterface(
    fun=net, optimizer="L-BFGS-B", loss=loss_instance, loss_config=params
)

optimizer_lbfgs.fit(input_data=data)

See this example for more details.

  • Batch normalization can be enabled for automatically generated autoencoders using the boolean argument use_batch_norm:
autoencoder = AutoencoderVariational(
    input_dim=(None, 1, 64, 128),
    latent_dim=8,
    activation="tanh",
    architecture="cnn",
    case="2d",
    use_batch_norm=True,
)

0.99.19

14 Apr 19:31
  • Bug fixes.
  • Made some arguments default in order to avoid unnecessary configuration.