Algebra with YASTN tensors#
Basic algebra operations with symmetric tensors#
See examples at Basic algebra operations.
Symmetric tensors can be added, subtracted, and multiplied or divided by a scalar through the usual operators +, -, *, /. Element-wise raising to a power is done with the standard power operator **.
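For illustration, a minimal sketch of these operations, assuming a U1-symmetric configuration with arbitrary charge sectors and dimensions:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))  # charges 0, 1 with dimensions 2, 3

    a = yastn.rand(config=config, legs=[leg, leg.conj()])
    b = yastn.rand(config=config, legs=[leg, leg.conj()])

    c = a + b          # addition of tensors with matching structure
    d = 2 * a - b / 3  # multiplication and division by scalars
    e = a ** 2         # element-wise power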
Simple element-wise operations#
- Tensor.__abs__() yastn.Tensor #
Return tensor with element-wise absolute values.
Can be called on a tensor as abs(tensor).
- Tensor.real() yastn.Tensor #
Return tensor with imaginary part set to zero.
Note
Follows the behavior of the
backend.real()
when it comes to creating a new copy of the data or handling the dtype.
- Tensor.imag() yastn.Tensor #
Return tensor with real part set to zero.
Note
Follows the behavior of the
backend.imag()
when it comes to creating a new copy of the data or handling the dtype.
- Tensor.sqrt() yastn.Tensor #
Return tensor with the element-wise square root of each tensor element.
- Tensor.rsqrt(cutoff=0) yastn.Tensor #
Return element-wise operation 1/sqrt(tensor).
The tensor elements with absolute value below the cutoff are set to zero.
- Parameters:
cutoff (real scalar) – (element-wise) cutoff for inversion
- Tensor.reciprocal(cutoff=0) yastn.Tensor #
Return element-wise operation 1/tensor.
The tensor elements with absolute value below the cutoff are set to zero.
- Parameters:
cutoff (real scalar) – (element-wise) cutoff for inversion
- Tensor.exp(step=1.0) yastn.Tensor #
Return element-wise exp(step * tensor).
Note
This applies only to non-empty blocks of the tensor.
- Tensor.__mul__(number) yastn.Tensor #
Multiply tensor by a number, use: number * tensor.
- Tensor.__pow__(exponent) yastn.Tensor #
Element-wise exponent of tensor, use: tensor ** exponent.
- Tensor.__truediv__(number) yastn.Tensor #
Divide tensor by a scalar, use: tensor / number.
- yastn.Tensor.__add__(a, b) yastn.Tensor #
Add two tensors, use: \(a + b\).
Signatures and total charges of two tensors should match.
- yastn.Tensor.__sub__(a, b) yastn.Tensor #
Subtract two tensors, use: \(a - b\).
Signatures and total charges of two tensors should match.
- yastn.linear_combination(*tensors, amplitudes=None, **kwargs)[source]#
Linear combination of tensors with given amplitudes, \(\sum_i amplitudes[i] tensors[i]\).
- Parameters:
tensors (Sequence[yastn.Tensor]) – Signatures and total charges of all tensors should match.
amplitudes (None | Sequence[Number]) – If None, all amplitudes are assumed to be one. Otherwise, the number of tensors and amplitudes should be the same. An individual amplitude can be None, which gives the same result as 1 but without an extra multiplication.
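A short sketch of linear_combination, assuming the same illustrative U1 setup as above:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])
    b = yastn.rand(config=config, legs=[leg, leg.conj()])

    c = yastn.linear_combination(a, b, amplitudes=(0.5, -2.0))  # 0.5 * a - 2.0 * b
    d = yastn.linear_combination(a, b)                          # amplitudes default to one, i.e., a + b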
- yastn.Tensor.__lt__(a, number) yastn.Tensor[bool] #
Logical tensor with elements less-than a number (if it makes sense for backend data tensors), use: mask = tensor < number
Intended for a diagonal tensor, to be applied as a truncation mask.
- yastn.Tensor.__gt__(a, number) yastn.Tensor[bool] #
Logical tensor with elements greater-than a number (if it makes sense for backend data tensors), use: mask = tensor > number
Intended for a diagonal tensor, to be applied as a truncation mask.
- yastn.Tensor.__le__(a, number) yastn.Tensor[bool] #
Logical tensor with elements less-than-or-equal-to a number (if it makes sense for backend data tensors), use: mask = tensor <= number
Intended for a diagonal tensor, to be applied as a truncation mask.
- yastn.Tensor.__ge__(a, number) yastn.Tensor[bool] #
Logical tensor with elements greater-than-or-equal-to a number (if it makes sense for backend data tensors), use: mask = tensor >= number
Intended for a diagonal tensor, to be applied as a truncation mask.
- yastn.Tensor.bitwise_not(a) yastn.Tensor[bool] #
Return tensor after applying bitwise not on each tensor element.
Note
The operation applies only to non-empty blocks of a tensor whose data dtype allows bitwise operations, i.e., it is intended for masks used to truncate tensor legs.
- yastn.allclose(a, b, rtol=1e-13, atol=1e-13) bool [source]#
Check if a and b are identical within a desired tolerance. To be
True
, all tensors’ blocks and merge history have to be identical. If this condition is satisfied, the backend.allclose function is executed to compare the tensors’ data. Note that if two tensors differ by zero blocks, the function returns False. To resolve such differences, use (a - b).norm() < tol.
- Parameters:
a, b (yastn.Tensor) – Tensors for comparison.
rtol, atol (float) – Desired relative and absolute precision.
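A minimal sketch comparing two tensors, assuming the illustrative U1 setup used above:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])
    b = a.copy()

    same = yastn.allclose(a, b)       # True: identical block structure and data
    close = (a - b).norm() < 1e-12    # alternative check, insensitive to zero blocks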
Tensor contractions#
See examples at Tensor contractions.
Tensor contractions are the main building block of tensor network algorithms. Functions below facilitate the computation of
Trace: \(B_{jl}= \sum_{i} T_{ijil}\) or using Einstein’s summation convention for repeated indices \(B_{jl} = T_{ijil}\).
Contractions: in the usual form \(C_{abc} = A_{aijb}{\times}B_{cij}\), as well as outer products \(M_{abkl} = A_{ak}{\times}B_{bl}\), or compositions of such operations over several tensors.
- Tensor.__matmul__(b) yastn.Tensor #
The operation A @ B uses the @ operator to compute the tensor dot product. It contracts the last axis of self, i.e., a, with the first axis of b.
It is equivalent to yastn.tensordot(a, b, axes=(a.ndim - 1, 0)).
- yastn.tensordot(a, b, axes, conj=(0, 0)) yastn.Tensor [source]#
Compute tensor dot product of two tensors along specified axes.
Outgoing legs are ordered such that first ones are the remaining legs of the first tensor in the original order, followed by the remaining legs of the second tensor in the original order.
- Parameters:
a, b (yastn.Tensor) – Tensors to contract.
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – legs of both tensors to be contracted (for each tensor, specified by an int or a tuple of ints), e.g., axes=(0, 3) to contract the 0th leg of a with the 3rd leg of b; axes=((0, 3), (1, 2)) to contract legs 0 and 3 of a with legs 1 and 2 of b, respectively.
conj (tuple[int, int]) – specify tensors to conjugate by: (0, 0), (0, 1), (1, 0), or (1, 1). The default is (0, 0), i.e., neither tensor is conjugated.
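A minimal tensordot sketch under the illustrative U1 setup (legs to be contracted carry opposite signatures):

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg, leg.conj(), leg.conj()])
    b = yastn.rand(config=config, legs=[leg, leg, leg.conj()])

    c = yastn.tensordot(a, b, axes=((2, 3), (0, 1)))  # contract legs 2, 3 of a with legs 0, 1 of b
    d = a @ b                                         # contract the last leg of a with the first leg of b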
- yastn.vdot(a, b, conj=(1, 0)) Number [source]#
Compute scalar product \(\langle a|b \rangle\) between two tensors.
- Parameters:
a, b (yastn.Tensor) – Tensors to contract.
conj (tuple[int, int]) – indicate which tensors to conjugate: (0, 0), (0, 1), (1, 0), or (1, 1). The default is (1, 0), i.e., tensor a is conjugated.
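A minimal vdot sketch under the same illustrative setup:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    x = yastn.rand(config=config, legs=[leg, leg.conj()])
    y = yastn.rand(config=config, legs=[leg, leg.conj()])

    overlap = yastn.vdot(x, y)   # <x|y>, with x conjugated by default
    norm_sq = yastn.vdot(x, x)   # equals x.norm() ** 2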
- yastn.broadcast(a, *args, axes=0) yastn.Tensor | tuple[yastn.Tensor] [source]#
Compute the tensordot product of the diagonal tensor a with the tensors in args.
The result is a diagonal tensor if both tensors are diagonal. Legs of the resulting tensors are ordered in the same way as those of the tensors in args. It is used (in combination with yastn.transpose()) as a subroutine of yastn.tensordot() for contractions involving a diagonal tensor.
- Parameters:
a, args (yastn.Tensor) – a is the diagonal tensor to be broadcast.
axes (int | Sequence[int]) – legs of the tensors in args to be multiplied by the diagonal tensor a. The number of tensors provided in args should match the length of axes.
- yastn.apply_mask(a, *args, axes=0) yastn.Tensor | tuple[yastn.Tensor] [source]#
Apply the mask given by the nonzero elements of the diagonal tensor a on the specified axes of the tensors in args. The number of tensors in args is not restricted; the length of the list axes has to match args.
Legs of the resulting tensors are ordered in the same way as those of the tensors in args. Bond dimensions of the specified axes of args are truncated according to the mask a. The result is a diagonal tensor if both tensors are diagonal.
- Parameters:
a, args (yastn.Tensor) – a is a diagonal tensor.
axes (int | Sequence[int]) – legs of the tensors in args where the mask is applied.
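A brief sketch combining broadcast and apply_mask; it uses yastn.linalg.svd (documented in the decompositions section below) only to obtain a diagonal tensor S, and the 0.1 cutoff is an arbitrary illustrative value:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])

    U, S, V = yastn.linalg.svd(a, axes=(0, 1))            # S is a diagonal tensor
    US = yastn.broadcast(S, U, axes=1)                    # multiply leg 1 of U by the diagonal S
    mask = S > 0.1                                        # boolean diagonal truncation mask
    U_t, V_t = yastn.apply_mask(mask, U, V, axes=(1, 0))  # truncate the connecting legs of U and V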
- yastn.trace(a, axes=(0, 1)) yastn.Tensor [source]#
Compute trace of legs specified by axes.
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Legs to be traced out, e.g., axes=(0, 1) or axes=((2, 3, 4), (0, 1, 5)).
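A minimal trace sketch under the illustrative U1 setup; to_number() is assumed here as a way to read out a rank-0 result as a plain scalar:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))

    t = yastn.rand(config=config, legs=[leg, leg, leg.conj(), leg.conj()])
    b = yastn.trace(t, axes=(0, 2))   # B_{jl} = sum_i T_{ijil}

    m = yastn.rand(config=config, legs=[leg, leg.conj()])
    val = yastn.trace(m, axes=(0, 1)).to_number()   # full trace as a scalar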
- yastn.einsum(subscripts, *operands, order=None) yastn.Tensor [source]#
Execute series of tensor contractions.
Covering trace, tensordot (including outer products), and transpose. Follows the notation of np.einsum() as closely as possible.
- Parameters:
subscripts (str)
operands (Sequence[yastn.Tensor])
order (str) – Specify the order in which repeated indices from subscripts are contracted. By default, it follows alphabetical order.
Example
    # matrix-matrix multiplication, where the first matrix is conjugated;
    # equivalent to t1.conj() @ t2
    yastn.einsum('*ij,jh->ih', t1, t2)

    # contract along `b` first, and `a` second
    yastn.einsum('ab,al,bm->lm', t1, t2, t3, order='ba')
- yastn.ncon(ts, inds, conjs=None, order=None) yastn.Tensor [source]#
Execute series of tensor contractions.
- Parameters:
ts (Sequence[yastn.Tensor]) – list of tensors to be contracted.
inds (Sequence[Sequence[int]]) – each inner tuple labels legs of respective tensor with integers. Positive values label legs to be contracted, with pairs of legs to be contracted denoted by the same integer label. Non-positive numbers label legs of the resulting tensor, in reversed order.
conjs (Sequence[int]) – For each tensor in ts, contains either 0 or 1. If the value is 1, the tensor is conjugated.
order (Sequence[int]) – Order in which legs, marked by positive indices in inds, are contracted. If None, the legs are contracted in ascending order of their indices. The default is None.
Note
yastn.ncon() and yastn.einsum() differ only in syntax.
Example
    # matrix-matrix multiplication where the first matrix is conjugated
    yastn.ncon([a, b], ((-0, 1), (1, -1)), conjs=(1, 0))

    # outer product
    yastn.ncon([a, b], ((-0, -2), (-1, -3)))
- yastn.swap_gate(a, axes, charge=None) yastn.Tensor [source]#
Return tensor after application of a swap gate.
The function’s action is controlled by the fermionic flag in the tensor config. Blocks with odd charges on the swapped legs are multiplied by \(-1\). The fermionic flag selects which individual charges (in the case of a direct product of a few symmetries) are tested for oddity, and the contributions from each selected charge get multiplied. See yastn.operators.SpinfulFermions for an example. For fermionic=True, all charges are considered. For fermionic=False, swap_gate returns a.
- Parameters:
axes (Sequence[int | Sequence[int]]) – Tuple with groups of legs; consecutive pairs of (groups of) legs are swapped. For instance, axes=(0, 1) applies a swap gate between the 0th and 1st legs; axes=((0, 1), (2, 3), 4, 5) swaps (0, 1) with (2, 3), and 4 with 5.
charge (Optional[Sequence[int]]) – If provided, the swap gate is applied between a virtual one-dimensional leg of the specified charge, e.g., a fermionic string, and the tensor legs specified in axes. In this case, no swap gates are applied between the legs specified in axes.
Transposition#
See examples at Transposition.
- yastn.transpose(a, axes=None) yastn.Tensor [source]#
Transpose tensor by permuting the order of its legs (spaces). Makes a shallow copy of tensor data if the order is not changed.
- Parameters:
axes (Sequence[int]) – new order of legs. Has to be a valid permutation of (0, 1, ..., ndim-1), where ndim is the tensor order (number of legs). The default is range(a.ndim)[::-1], which reverses the order of the axes.
- property Tensor.T: yastn.Tensor#
Same as self.transpose().
- yastn.moveaxis(a, source, destination) yastn.Tensor [source]#
Change the position of an axis (or a group of axes) of the tensor. This is a convenience function for a subset of possible permutations. It computes the corresponding permutation and calls yastn.transpose().
Makes a shallow copy of the tensor data if the order is not changed.
- Parameters:
source, destination (int | Sequence[int])
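A minimal sketch of transposition under the illustrative U1 setup:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    t = yastn.rand(config=config, legs=[leg, leg, leg.conj()])

    t1 = t.transpose(axes=(2, 0, 1))                  # permute legs
    t2 = t.T                                          # reverse the leg order
    t3 = yastn.moveaxis(t, source=0, destination=-1)  # move leg 0 to the last position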
Fusion of legs (reshaping)#
See examples at Fusion (reshaping).
Fusion of several vector spaces \(V_1,V_2,\ldots,V_n\) creates a new vector space as the direct product \(W=V_1 \otimes V_2 \otimes \ldots \otimes V_n\), which is then indexed by a single index of dimension \(\prod_i {\rm dim}(V_i)\). In the presence of abelian symmetry, the fused space is organized into sectors of effective charges, and the total dimension is the sum of the dimensions of those sectors. The inverse operation can split the fused space into its original constituents.
For dense tensors, the operation corresponds to reshaping.
Fusion can be used to vary compression between (unfused) symmetric tensors with many small non-zero blocks and tensors with several fused spaces having just few, but large non-zero blocks.
- Tensor.fuse_legs(axes, mode=None) yastn.Tensor #
Fuse groups of legs into effective legs, reducing the rank of the tensor.
Note
Fusion can be reverted by yastn.Tensor.unfuse_legs().
First, the legs are permuted into the desired order. Then, selected groups of consecutive legs are fused. The desired order of the legs is given by the tuple axes of leg indices, where groups of legs to be fused are denoted by inner tuples.

    axes=(0, 1, (2, 3, 4), 5)  keep leg order, fuse legs (2, 3, 4) into a new leg
    ->  (0, 1, 2, 3)

        __                    __
    0--|  |--3           0--|  |--3<-5
    1--|  |--4     =>    1--|__|
    2--|__|--5               |
                             2<-(2,3,4)

    axes=(2, (3, 1), 4, (7, 6), 5, 0)  permute indices, then fuse legs (3, 1) into a new leg
    ->  (0, 1, 2, 3, 4, 5)             and legs (7, 6) into another new leg

        __                           __
    0--|  |--4              0->5--|  |--2<-4
    1--|  |--5      =>            |  |--4<-5
    2--|  |--6              2->0--|  |--3<-(7,6)
    3--|__|--7          (3,1)->1--|__|
Two types of fusion are supported: meta and hard:
'hard' changes both the structure and the data by aggregating smaller blocks into larger ones. Such fusion allows one to balance the number of non-zero blocks against the typical block size.
'meta' performs the fusion only at the level of syntax, where the tensor operates as if it had a lower rank. The tensor structure and data (blocks) are not affected, apart from a transpose that may be needed for consistency.
It is possible to use both meta and hard fusion of legs on the same tensor. Applying hard fusion to a tensor turns all previously meta-fused legs into hard-fused ones.
- Parameters:
axes (Sequence[int | Sequence[int]]) – tuple of leg indices. Groups of legs to be fused together are accumulated within inner tuples.
mode (str) – selects 'hard' or 'meta' fusion. If None, uses default_fusion from the tensor’s configuration. The configuration option force_fusion can be used to override mode (introduced for debugging purposes).
- Tensor.unfuse_legs(axes) yastn.Tensor #
Unfuse legs, reverting one layer of fusion.
If the tensor has been obtained by fusing some legs together, unfuse_legs can revert such fusion. The legs to be unfused are passed in axes as an int, or as a tuple[int] when more legs are to be unfused. The unfused legs are inserted at the positions of the fused legs, and the remaining legs are shifted accordingly.

    axes=2  unfuse leg 2 into legs 2, 3, 4
    ->  (0, 1, 2, 3, 4, 5)

        __                  __
    0--|  |--3          0--|  |--3
    1--|__|      =>     1--|  |--4
        |               2--|__|--5<-3
    2=(2,3,4)

    axes=(2, 5)  unfuse leg 2 into legs 2, 3 and leg 5 into legs 6, 7
    ->  (0, 1, 2, 3, 4, 5, 6, 7)

              __                  __
          0--|  |--3          0--|  |--4
             |  |--4    =>    1--|  |--5
          1--|  |--5=(6,7)    2--|  |--6
    (2,3)=2--|__|             3--|__|--7
Unfusing a leg obtained by fusing together other previously fused legs unfuses only the last fusion.
    axes=2  unfuse leg 2 into legs 2, 3
    ->  (0, 1, 2, 3, 4)

        __                     __
    0--|  |--3             0--|  |--3=(3,4)
    1--|__|        =>      1--|  |
        |                  2--|__|--4<-3
    2=(2,3=(3,4))
fuse_legs may involve leg transposition, which is not undone by unfuse_legs.
- Parameters:
axes (int | Sequence[int]) – leg(s) to unfuse.
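A minimal fuse/unfuse round trip under the illustrative U1 setup:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    t = yastn.rand(config=config, legs=[leg, leg, leg.conj(), leg.conj()])

    tf = t.fuse_legs(axes=((0, 1), (2, 3)), mode='hard')  # rank-4 -> rank-2
    tu = tf.unfuse_legs(axes=(0, 1))                      # revert both fusions
    assert (t - tu).norm() < 1e-12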
- Tensor.add_leg(axis=-1, s=-1, t=None, leg=None) yastn.Tensor #
Creates a new tensor with an extra leg that carries the charge (or part of it) of the original tensor. This is achieved by the extra leg having a single charge sector of dimension D=1. The total charge of the tensor, yastn.Tensor.n, can be modified this way.
Makes a shallow copy of the tensor data.
- Parameters:
axis (int) – index of the new leg
s (int) – signature of the new leg, +1 or -1. The default is -1, for which the leg charge is equal to the tensor charge when t=None.
t (int | Sequence[int]) – charge carried by the new leg. The default is None, which takes the total charge n of the original tensor, resulting in a tensor with n=0.
leg (Optional[Leg]) – It is possible to provide the new leg directly. It has to be of dimension one, but it can contain information about the fusion of other dimension-one legs. If provided, it overrides the information in s and t. The default is None.
- Tensor.remove_leg(axis=-1) yastn.Tensor #
Removes a leg with a single charge sector of dimension one from the tensor. The charge carried by that leg (if any) is added to the tensor’s total charge yastn.Tensor.n.
Makes a shallow copy of the tensor data.
- Parameters:
axis (int) – index of the leg to be removed.
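A minimal add_leg/remove_leg sketch, assuming the illustrative U1 setup and a tensor with total charge n=1:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    v = yastn.rand(config=config, legs=[leg, leg], n=1)   # total charge n=1

    v0 = v.add_leg(axis=-1, s=-1)   # extra dimension-one leg absorbs the charge; total charge becomes 0
    v1 = v0.remove_leg(axis=-1)     # the charge is moved back onto the tensor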
- Tensor.drop_leg_history(axes=None) yastn.Tensor #
Drops information about the original structure of fused or blocked legs that have been combined into the selected tensor leg(s).
Makes a shallow copy of the tensor data.
- Parameters:
axes (int | Sequence[int]) – index of the leg, or a group of legs. The default is
None
, which drops information from all legs.
Conjugation of symmetric tensors#
See examples at Conjugation of symmetric tensors.
- Tensor.conj() yastn.Tensor #
Return conjugated tensor. In particular, change the sign of the signature s to -s, the total charge n to -n, and complex conjugate each block of the tensor.
Follows the behavior of the
backend.conj()
when it comes to creating a new copy of the data.
- Tensor.conj_blocks() yastn.Tensor #
Complex-conjugate all blocks, leaving the symmetry structure (signature, block charges, and total charge) unchanged.
Follows the behavior of the
backend.conj()
when it comes to creating a new copy of the data.
- Tensor.flip_signature() yastn.Tensor #
Change the signature of the tensor from s to -s, or equivalently reverse the direction of in- and out-going legs, and change the total charge of the tensor from n to -n. Does not complex-conjugate the elements of the tensor.
Creates a shallow copy of the data.
- Tensor.flip_charges(axes=None) yastn.Tensor #
Flip signs of charges and signatures on specified legs.
Flipping charges/signature of hard-fused legs is not supported.
- Parameters:
axes (int | Sequence[int]) – index of the leg, or a group of legs. The default is
None
, which flips all legs.
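A minimal sketch contrasting the conjugation variants, assuming the illustrative U1 setup with complex data:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()], dtype='complex128')

    b = a.conj()            # conjugate blocks, flip signature s -> -s and total charge n -> -n
    c = a.conj_blocks()     # conjugate blocks only; signature and charges unchanged
    d = a.flip_signature()  # flip signature and total charge; data unchanged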
Tensor norms#
- yastn.linalg.norm(a, p='fro') Number #
Norm of the tensor.
- Parameters:
p (str) – 'fro' for the Frobenius norm; 'inf' for the \(l^\infty\) (or supremum) norm.
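A minimal norm sketch under the illustrative U1 setup:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])

    nf = yastn.linalg.norm(a)           # Frobenius norm (default p='fro')
    ni = yastn.linalg.norm(a, p='inf')  # largest magnitude among tensor elements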
Spectral decompositions and truncation#
See examples at Decompositions of symmetric tensors.
- yastn.linalg.svd(a, axes=(0, 1), sU=1, nU=True, compute_uv=True, Uaxis=-1, Vaxis=0, policy='fullrank', fix_signs=False, svd_on_cpu=False, **kwargs) tuple[yastn.Tensor, yastn.Tensor, yastn.Tensor] | yastn.Tensor #
Split tensor into \(a = U S V\) using exact singular value decomposition (SVD), where the columns of U and the rows of V form orthonormal bases and S is a positive and diagonal matrix.
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Specify two groups of legs between which to perform SVD, as well as their final order.
sU (int) – Signature of the new leg in U; equal to 1 or -1. The default is 1. V is going to have the opposite signature on the connecting leg.
nU (bool) – Whether or not to attach the charge of a to U. If False, it is attached to V. The default is True.
compute_uv (bool) – If True, compute and return U, S, V. If False, compute and return only S. The default is True.
Uaxis, Vaxis (int) – Specify which legs of the U and V tensors are connecting with S. By default, it is the last leg of U and the first leg of V, in which case a = U @ S @ V.
policy (str) – "fullrank" or "lowrank" are allowed. Use standard full (but reduced) SVD for "fullrank". For "lowrank", a randomized/truncated SVD is used, which requires providing D_block in kwargs. This employs scipy.sparse.linalg.svds for the numpy backend and torch.svd_lowrank for the torch backend; kwargs will be passed to those functions for non-default settings.
fix_signs (bool) – Whether or not to fix phases in U and V, so that the largest element in each column of U is positive. This provides uniqueness of the decomposition in non-degenerate cases. The default is False.
svd_on_cpu (bool) – GPUs tend to be very slow when executing SVD. If True, the data will be copied to the CPU for SVD, and the results will be copied back to the device. Nothing is done for data already residing on the CPU. The default is False.
- Return type:
U, S, V (when compute_uv=True) or S (when compute_uv=False)
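A minimal SVD sketch under the illustrative U1 setup; with the default Uaxis/Vaxis the decomposition is recovered as U @ S @ V:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    t = yastn.rand(config=config, legs=[leg, leg, leg.conj()])

    U, S, V = yastn.linalg.svd(t, axes=((0, 1), 2))  # group legs (0, 1) against leg 2
    assert (t - U @ S @ V).norm() < 1e-10            # exact (non-truncated) decomposition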
- yastn.linalg.svd_with_truncation(a, axes=(0, 1), sU=1, nU=True, Uaxis=-1, Vaxis=0, policy='fullrank', fix_signs=False, svd_on_cpu=False, tol=0, tol_block=0, D_block=inf, D_total=inf, truncate_multiplets=False, mask_f=None, **kwargs) tuple[yastn.Tensor, yastn.Tensor, yastn.Tensor] #
Split a tensor using exact singular value decomposition (SVD) into \(a = U S V\), where the columns of U and the rows of V form orthonormal bases and S is a positive and diagonal matrix.
The function allows for optional truncation. Truncation can be based on relative tolerance, bond dimension of each block, and total bond dimension across all blocks (whichever gives smaller total dimension).
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Specify two groups of legs between which to perform SVD, as well as their final order.
sU (int) – signature of the new leg in U; equal to 1 or -1. The default is 1. V is going to have the opposite signature on the connecting leg.
nU (bool) – Whether or not to attach the charge of a to U. If False, it is attached to V. The default is True.
Uaxis, Vaxis (int) – specify which legs of the U and V tensors are connecting with S. By default, it is the last leg of U and the first leg of V.
policy (str) – "fullrank" or "lowrank" are allowed. For "fullrank", use standard full (but reduced) SVD; "lowrank" uses randomized/truncated SVD and requires providing D_block in kwargs.
tol (float) – relative tolerance of singular values below which to truncate across all blocks.
tol_block (float) – relative tolerance of singular values below which to truncate within individual blocks.
D_block (int) – largest number of singular values to keep in a single block.
D_total (int) – largest total number of singular values to keep.
truncate_multiplets (bool) – If True, enlarge the truncation range specified by the other arguments by shifting the cut to the largest gap between to-be-truncated singular values across all blocks. It provides a heuristic mechanism to avoid truncating part of a multiplet. The default is False.
mask_f (function[yastn.Tensor] -> yastn.Tensor) – custom truncation-mask function. If provided, it overrides all other truncation-related arguments.
- Return type:
U, S, V
- yastn.linalg.qr(a, axes=(0, 1), sQ=1, Qaxis=-1, Raxis=0) tuple[yastn.Tensor, yastn.Tensor] #
Split a tensor using reduced QR decomposition, such that \(a = Q R\), with \(QQ^\dagger=I\). The charge of R is zero; the charge of a is carried by Q.
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Specify two groups of legs between which to perform QR, as well as their final order.
sQ (int) – signature of the connecting leg in Q; equal to 1 or -1. The default is 1. R is going to have the opposite signature on the connecting leg.
Qaxis, Raxis (int) – specify which legs of the Q and R tensors connect them with each other. By default, it is the last leg of Q and the first leg of R.
- Return type:
Q, R
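A minimal QR sketch under the illustrative U1 setup:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    t = yastn.rand(config=config, legs=[leg, leg, leg.conj()])

    Q, R = yastn.linalg.qr(t, axes=((0, 1), 2))
    assert (t - Q @ R).norm() < 1e-10   # a = Q R with the default Qaxis / Raxis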
- yastn.linalg.eigh(a, axes, sU=1, Uaxis=-1) tuple[yastn.Tensor, yastn.Tensor] #
Split symmetric tensor using exact eigenvalue decomposition, \(a= USU^{\dagger}\).
The tensor is expected to be symmetric (hermitian) with total charge 0.
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Specify two groups of legs between which to perform eigh, as well as their final order.
sU (int) – signature of the connecting leg in U; equal to 1 or -1. The default is 1.
Uaxis (int) – specify which leg of U is the new connecting leg. By default, it is the last leg.
- Return type:
S, U
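A minimal eigh sketch; the hermitian input with total charge zero is built here from a random tensor, as an illustrative assumption:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])

    h = yastn.tensordot(a, a, axes=(1, 1), conj=(0, 1))  # hermitian, total charge zero
    S, U = yastn.linalg.eigh(h, axes=(0, 1))             # h = U S U^dagger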
- yastn.linalg.eigh_with_truncation(a, axes, sU=1, Uaxis=-1, tol=0, tol_block=0, D_block=inf, D_total=inf, truncate_multiplets=False) tuple[yastn.Tensor, yastn.Tensor] #
Split symmetric tensor using exact eigenvalue decomposition, \(a= USU^{\dagger}\). Optionally, truncate the resulting decomposition.
The tensor is expected to be symmetric (hermitian) with total charge 0. Truncation can be based on relative tolerance, bond dimension of each block, and total bond dimension across all blocks (whichever gives the smaller total dimension). Truncation based on tolerance is applied only if some eigenvalues are positive; in that case, all negative eigenvalues are discarded.
- Parameters:
axes (tuple[int, int] | tuple[Sequence[int], Sequence[int]]) – Specify two groups of legs between which to perform eigh, as well as their final order.
sU (int) – signature of the connecting leg in U; equal to 1 or -1. The default is 1.
Uaxis (int) – specify which leg of U is the new connecting leg. By default, it is the last leg.
tol (float) – relative tolerance of eigenvalues below which to truncate across all blocks.
tol_block (float) – relative tolerance of eigenvalues below which to truncate within individual blocks.
D_block (int) – largest number of eigenvalues to keep in a single block.
D_total (int) – largest total number of eigenvalues to keep.
- Return type:
S, U
- yastn.linalg.truncation_mask(S, tol=0, tol_block=0, D_block=inf, D_total=inf, truncate_multiplets=False, mask_f=None, **kwargs) yastn.Tensor[bool] #
Generate a mask tensor based on the diagonal and real tensor S. It can then be used for truncation.
Per-block options D_block and tol_block govern truncation within individual blocks, keeping at most D_block values that are larger than the relative cutoff tol_block.
- Parameters:
S (yastn.Tensor) – Diagonal tensor with spectrum.
tol (float) – relative tolerance.
tol_block (float) – relative tolerance per block.
D_total (int) – maximum number of elements kept across all blocks.
D_block (int) – maximum number of elements kept per block.
truncate_multiplets (bool) – If True, enlarge the truncation range specified by the other arguments by shifting the cut to the largest gap between to-be-truncated singular values across all blocks. It provides a heuristic mechanism to avoid truncating part of a multiplet. If True, tol_block and D_block are ignored, as truncate_multiplets is a global condition. The default is False.
mask_f (None | function[yastn.Tensor] -> yastn.Tensor) – Custom mask function. The default is None. If provided, it overrides the default function and all other parameters are ignored.
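A brief sketch of the typical truncation workflow built from truncation_mask and apply_mask (the bond-dimension and tolerance values are arbitrary); yastn.linalg.svd_with_truncation bundles the same steps into a single call:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    t = yastn.rand(config=config, legs=[leg, leg, leg.conj()])

    U, S, V = yastn.linalg.svd(t, axes=((0, 1), 2))
    mask = yastn.linalg.truncation_mask(S, D_total=4, tol=1e-10)
    U_t, S_t, V_t = yastn.apply_mask(mask, U, S, V, axes=(2, 0, 0))
    t_approx = U_t @ S_t @ V_t   # truncated approximation of t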
- yastn.linalg.truncation_mask_multiplets(S, tol=0, D_total=inf, eps_multiplet=1e-13, **kwargs) yastn.Tensor[bool] #
Generate a mask tensor from a real positive spectrum S, while preserving degenerate multiplets. This is achieved by truncating the spectrum at a boundary between multiplets.
- Parameters:
S (yastn.Tensor) – Diagonal tensor with spectrum.
tol (float) – relative tolerance.
D_total (int) – maximum number of elements kept in the result.
eps_multiplet (float) – relative tolerance on multiplet splitting. If the relative difference between two consecutive elements of S is larger than eps_multiplet, these elements are not considered part of the same multiplet.
- yastn.linalg.entropy(a, alpha=1, tol=1e-12) Number #
Calculate entropy from probabilities encoded in the diagonal tensor a.
Normalizes the sum of a to 1, but does not otherwise check correctness. Uses base-2 logarithm. For an empty or zero tensor, returns 0.
- Parameters:
alpha (float) – Order of the Rényi entropy. alpha=1 (the default) gives the von Neumann entropy \(-{\rm Tr}(a \cdot {\rm log2}(a))\); otherwise \(\frac{1}{1-\alpha} {\rm log2}({\rm Tr}(a^{\alpha}))\).
tol (float) – Discard all probabilities smaller than tol during the calculation.
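A minimal entropy sketch; the probabilities are taken here as squared singular values of a random tensor, purely for illustration:

    import yastn

    config = yastn.make_config(sym='U1')
    leg = yastn.Leg(config, s=1, t=(0, 1), D=(2, 3))
    a = yastn.rand(config=config, legs=[leg, leg.conj()])

    _, S, _ = yastn.linalg.svd(a, axes=(0, 1))
    probs = S ** 2                             # Schmidt spectrum as probabilities
    vn = yastn.linalg.entropy(probs)           # von Neumann entropy (alpha=1, base-2)
    r2 = yastn.linalg.entropy(probs, alpha=2)  # Renyi entropy of order 2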
Auxiliary#
Methods called by Krylov-based algorithms.
- Tensor.expand_krylov_space(f, tol, ncv, hermitian, V, H=None, **kwargs)#
Expand the Krylov basis up to ncv states or until reaching the desired tolerance tol. Implementation for yastn.Tensor.
- Tensor.linear_combination(*, amplitudes=None, **kwargs)#
Linear combination of tensors with given amplitudes, \(\sum_i amplitudes[i] tensors[i]\).
- Parameters:
tensors (Sequence[yastn.Tensor]) – Signatures and total charges of all tensors should match.
amplitudes (None | Sequence[Number]) – If None, all amplitudes are assumed to be one. Otherwise, the number of tensors and amplitudes should be the same. An individual amplitude can be None, which gives the same result as 1 but without an extra multiplication.