Autograd support

The algebra of symmetric tensors is built from the usual tensor algebra, now operating over many blocks (dense tensors). Hence, by composition, operations on symmetric tensors can be differentiated straightforwardly, provided the individual operations on dense tensors support autograd.

YASTN supports autograd through selected backends that provide this capability for dense tensor algebra, for example the PyTorch backend.
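
As a concrete illustration, a U(1)-symmetric configuration backed by PyTorch could be set up as sketched below. This is a minimal sketch: the backend and symmetry module paths (yastn.backend.backend_torch, yastn.sym.sym_U1) and the yastn.make_config call follow YASTN's usual layout, but check them against the version you use. The resulting config_U1 is the configuration assumed in the example further below.

    import yastn
    import yastn.backend.backend_torch as backend  # dense-tensor backend built on PyTorch, provides autograd
    import yastn.sym.sym_U1 as sym_U1              # U(1) symmetry

    # All blocks of tensors created with this config are torch tensors,
    # so PyTorch can record and differentiate the block operations.
    config_U1 = yastn.make_config(backend=backend, sym=sym_U1)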

You can activate autograd on a YASTN tensor:

Tensor.requires_grad_(requires_grad=True)

Activate or deactivate recording of operations on the tensor for automatic differentiation.

Parameters:

requires_grad (bool) – if True, activates autograd.

Operations on the tensor are then recorded for later differentiation.

    def test_requires_grad(self):
        #
        # create a random U1 symmetric tensor. By default, such tensor
        # does not have autograd active. Activate it
        #
        leg1 = yastn.Leg(config_U1, s=1, t=(-1, 0, 1), D=(2, 3, 4))
        leg2 = yastn.Leg(config_U1, s=1, t=(-1, 1, 2), D=(2, 4, 5))
        a = yastn.rand(config=config_U1, legs=[leg1, leg1.conj(), leg2.conj(), leg2])
        assert not a.requires_grad
        a.requires_grad_(True)

        #
        # verify that outputs of functions operating on tensor a return
        # tensors that also have autograd active
        b = yastn.rand(config=config_U1, legs=[leg1, leg1.conj()])
        c = yastn.tensordot(a, b, axes=((2, 3), (0, 1)))
        assert c.requires_grad
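
The test above only verifies that autograd status propagates through yastn.tensordot. Below is a minimal sketch of an actual backward pass; it assumes config_U1 uses the PyTorch backend, so that a scalar such as yastn.norm(a) is returned as a torch scalar on which .backward() can be called.

    # a small tensor with autograd active (PyTorch backend assumed)
    leg = yastn.Leg(config_U1, s=1, t=(-1, 0, 1), D=(2, 3, 4))
    a = yastn.rand(config=config_U1, legs=[leg, leg.conj()])
    a.requires_grad_(True)

    # any scalar derived from a carries the recorded computation graph;
    # here we differentiate its norm
    loss = yastn.norm(a)
    loss.backward()  # standard PyTorch backward pass

How the resulting gradients are accessed afterwards depends on the backend; with PyTorch they accumulate on the dense blocks that store the tensor's data.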