Chapter 5: Tensor Network Contraction and Multi-Linear Algebra

Abstract: This chapter is aimed at understanding TN algorithms from the perspective of MLA. In Sect. 5.1, we start from a simple example with a 1D TN stripe, which can be contracted by solving the eigenvalue decomposition of matrices. This relates to several important MPS techniques, such as canonicalization (Orús and Vidal [2]).


This chapter is about using MLA to understand and develop TN algorithms. MLA is also known as tensor decompositions or tensor algebra [1]. It is a highly interdisciplinary subject. One of its tasks is to generalize the techniques of linear algebra to higher-order tensors. For instance, one key question is how to define the rank of a tensor and how to determine its optimal lower-rank approximation. This is exactly what we need in the TN algorithms.

5.1 A Simple Example of Solving Tensor Network Contraction by Eigenvalue Decomposition

Let us begin with a trivial example by simply considering the trace of the product of N copies of a matrix M,

$\mathrm{Tr}\,\mathcal{M} = \mathrm{Tr}\,(M^{(1)} M^{(2)} \cdots M^{(N)}) = \mathrm{Tr} \prod_{n=1}^{N} M^{(n)}, \qquad (5.1)$

with $M^{(n)} = M$. In the language of TN, this can be regarded as a 1D TN with periodic boundary condition. For simplicity, we assume that the dominant eigenstate of M is unique.

Allow us to firstly use a clumsy way to do the calculation: contract the shared bonds one by one from left to right. For each contraction, the computational cost is $O(D^3)$, with D the dimension of M; thus the total cost is $O(N D^3)$.

Now let us be smarter by using the eigenvalue decomposition of M (assuming it exists) in the linear algebra, which reads

$M = U \Lambda U^{\dagger}, \qquad (5.2)$

where $\Lambda$ is diagonal and U is unitary, satisfying $U U^{\dagger} = U^{\dagger} U = I$.
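As a quick numerical check of Eq. (5.2): for a Hermitian M, the decomposition is directly available in numpy. A minimal sketch (the matrix and its size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((10, 10))
M = B @ B.T                    # symmetric (Hermitian) M, so Eq. (5.2) exists

lam, U = np.linalg.eigh(M)     # M = U diag(lam) U^dagger
print(np.allclose(U @ np.diag(lam) @ U.T, M))   # True: decomposition reconstructs M
print(np.allclose(U @ U.T, np.eye(10)))         # True: U is unitary (orthogonal here)
```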
Substituting Eq. (5.2) into (5.1), we can readily have the contraction as

$\mathrm{Tr}\,\mathcal{M} = \mathrm{Tr}\,(U \Lambda U^{\dagger} U \Lambda U^{\dagger} \cdots U \Lambda U^{\dagger}) = \mathrm{Tr}\,(U \Lambda^{N} U^{\dagger}) = \mathrm{Tr}\,\Lambda^{N}. \qquad (5.3)$

The dominant computational cost is around $O(D^3)$.

In the limit $N \to \infty$, things become even easier, where we have

$\mathrm{Tr}\,\mathcal{M} = \sum_{a} \lambda_{a}^{N} = \lambda_{0}^{N} \sum_{a} (\lambda_{a}/\lambda_{0})^{N} \simeq \lambda_{0}^{N}, \qquad (5.4)$

where $\lambda_0$ is the largest eigenvalue and we have $\lim_{N \to \infty} (\lambda_{a}/\lambda_{0})^{N} = 0$ for $a \neq 0$. It means that all the contributions except for the dominant eigenvalue vanish when the TN is infinitely long. What we should do is just to compute the dominant eigenvalue. The efficiency can be further improved by more mature techniques such as the Lanczos algorithm.
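The three routes above are easy to compare numerically. A minimal sketch (all sizes and the seed are arbitrary; M is taken symmetric positive semi-definite so that the dominant eigenvalue is positive):

```python
import numpy as np

D, N = 10, 200
rng = np.random.default_rng(1)
B = rng.standard_normal((D, D))
M = B @ B.T / D                      # Hermitian M with a positive dominant eigenvalue

# (i) clumsy way: contract bond by bond, total cost O(N D^3)
prod = np.eye(D)
for _ in range(N):
    prod = prod @ M
trace_direct = np.trace(prod)

# (ii) eigenvalue decomposition, Eqs. (5.2)-(5.3): Tr M^N = sum_a lam_a^N
lam = np.linalg.eigvalsh(M)
trace_eig = np.sum(lam ** N)

# (iii) N -> infinity limit, Eq. (5.4): only the dominant eigenvalue survives
trace_dom = np.max(lam) ** N

print(trace_direct / trace_eig)      # should be ~1.0
print(trace_dom / trace_eig)         # should approach 1.0 for large N
```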
5.1.1 Canonicalization of Matrix Product State

Before considering a 2D TN, let us take some more advantage of the eigenvalue decomposition on 1D TNs, which is closely related to the canonicalization of MPS proposed by Orús and Vidal for non-unitary evolution of MPS [2]. The utilization of canonicalization is mainly in two aspects: locating optimal truncations of the MPS, and fixing the gauge degrees of freedom of the MPS for better stability and efficiency.

5.1.2 Canonical Form and Globally Optimal Truncations of MPS

As discussed in the above chapter, when using iTEBD to contract a TN, one needs to find the optimal truncations of the virtual bonds of the MPS. In other words, the problem is how to optimally reduce the dimension of an MPS.

The globally optimal truncation can be done in the following (expensive) way. Let us divide the MPS into two parts by cutting the bond that is to be truncated (Fig. 5.1). Then, if we contract all the virtual bonds on the left-hand side and reshape all the physical indexes there into one index, we will obtain a large matrix denoted as $L_{s_n, a_n}$, which has one big physical index and one virtual index. Another matrix, denoted as $R^{*}_{s'_{n+1}, a_n}$, can be obtained by doing the same thing on the right-hand side; the conjugate of R is taken there to obey some conventions.

Then, by contracting the virtual bond and doing the SVD as

$\sum_{a_n} L_{s_n, a_n} R^{*}_{s'_{n+1}, a_n} = \sum_{a'_n} \tilde{L}_{s_n, a'_n} \lambda_{a'_n} \tilde{R}^{*}_{s'_{n+1}, a'_n}, \qquad (5.5)$

the virtual bond dimension is optimally reduced to χ by taking only the χ largest singular values and the corresponding vectors. The truncation error that is minimized is the distance between the MPS before and after the truncation. Therefore, the truncation is globally optimal, concerning the whole MPS as the environment.

Fig. 5.1 An impractical scheme to get the globally optimal truncation of a virtual bond (red). First, the MPS is cut into two parts. All the indexes on each side of the cut are grouped into one big index. Then, by contracting the virtual bond and doing the SVD, the virtual bond dimension is optimally reduced to χ by taking only the χ largest singular values and the corresponding vectors.

In practice, we do not implement the SVD above: it is actually the decomposition of the whole wave function, which is exponentially expensive. Canonicalization provides an efficient way to realize this SVD through only local operations.

Fig. 5.2 The MPS with two-site translational invariance.

Considering an infinite MPS with two-site translational invariance (Fig. 5.2), it is formed by the tensors A and B as well as the diagonal matrices λ and λ̃ as

$\Psi_{\cdots s_{n-1} s_{n} \cdots} = \mathrm{tTr}\,(\cdots \lambda_{a_{n-1}} A_{s_{n-1}, a_{n-1} a_{n}} \tilde{\lambda}_{a_{n}} B_{s_{n}, a_{n} a_{n+1}} \lambda_{a_{n+1}} \cdots) = \mathrm{tTr}\,(\cdots \lambda A \tilde{\lambda} B \cdots). \qquad (5.6)$

This is the MPS used in the iTEBD algorithm (see Sect. 3.4 and Fig. 3.5). Note that all arguments can be readily generalized to infinite MPSs with n-site translational invariance, or even to finite MPSs.

An MPS is in the canonical form if the tensors satisfy

$\sum_{s a} \lambda_{a} A_{s, a a'} \lambda_{a} A^{*}_{s, a a''} = I_{a' a''}, \qquad (5.7)$

$\sum_{s a'} A_{s, a a'} \tilde{\lambda}_{a'} A^{*}_{s, a'' a'} \tilde{\lambda}_{a'} = I_{a a''}, \qquad (5.8)$

$\sum_{s a} \tilde{\lambda}_{a} B_{s, a a'} \tilde{\lambda}_{a} B^{*}_{s, a a''} = I_{a' a''}, \qquad (5.9)$

$\sum_{s a'} B_{s, a a'} \lambda_{a'} B^{*}_{s, a'' a'} \lambda_{a'} = I_{a a''}, \qquad (5.10)$

where λ and λ̃ are positive-definite (Fig. 5.3). Equations (5.7)–(5.10) are called the canonical conditions of the MPS. Note that there will be 2n equations with n-site translational invariance, meaning that each inequivalent tensor obeys two (left and right) conditions.

In the canonical form, λ or λ̃ directly gives the singular values obtained by cutting the MPS on the corresponding bond.
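The impractical scheme of Fig. 5.1 and Eq. (5.5) can still be run literally on a small finite chain, where the full wave function is affordable. A minimal sketch (all sizes are arbitrary; each MPS tensor is assumed to carry the index order left bond, physical, right bond):

```python
import numpy as np

rng = np.random.default_rng(2)
d, chi, n = 2, 4, 8                  # physical dim, bond dim, number of sites

# random open-boundary MPS tensors
mps = [rng.standard_normal((1 if i == 0 else chi, d, chi if i < n - 1 else 1))
       for i in range(n)]

# contract into the full wave function -- exponentially expensive in general
psi = mps[0]
for t in mps[1:]:
    psi = np.tensordot(psi, t, axes=(-1, 0))   # contract the shared virtual bond
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)

# cut in the middle: rows group the left physical indexes, columns the right ones
mat = psi.reshape(d ** (n // 2), d ** (n // 2))
L, lam, Rh = np.linalg.svd(mat, full_matrices=False)   # Eq. (5.5)

chi_cut = 4
err = np.sqrt(np.sum(lam[chi_cut:] ** 2))   # global error of keeping chi_cut values
print(lam[:chi_cut], err)
```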
To see this, let us calculate Eq. (5.5) from a canonical MPS. From the canonical conditions, the matrices L and R are unitary, satisfying $L^{\dagger} L = I$ and $R^{\dagger} R = I$ (the physical indexes are contracted). Meanwhile, λ (or λ̃) is positive-definite; thus L, λ (or λ̃), and R of a canonical MPS directly define the SVD, and λ (or λ̃) is indeed the singular-value spectrum. Then the optimal truncation of a virtual bond is reached by simply keeping the χ largest values of λ and the corresponding basis of the neighboring tensors. This is true when cutting any one of the bonds of the MPS. From the uniqueness of the SVD, Eqs. (5.7) and (5.8) lead to a unique MPS representation; thus, such a form is called canonical. In other words, the canonicalization fixes the gauge degrees of freedom of the MPS.

Fig. 5.3 Four canonical conditions of an MPS.

For any finite MPS, the uniqueness is robust. For an infinite MPS, there will be some additional complexity. Let us define the left and right transfer matrices $M^L$ and $M^R$ of A as

$M^{L}_{a_1 a'_1, a_2 a'_2} = \sum_{s} \lambda_{a_1} A_{s, a_1 a_2} \lambda_{a'_1} A^{*}_{s, a'_1 a'_2}, \qquad (5.11)$

$M^{R}_{a_1 a'_1, a_2 a'_2} = \sum_{s} A_{s, a_1 a_2} \tilde{\lambda}_{a_2} A^{*}_{s, a'_1 a'_2} \tilde{\lambda}_{a'_2}. \qquad (5.12)$

Then the canonical conditions Eqs. (5.7) and (5.8) say that the identity is the left (right) eigenvector of $M^L$ ($M^R$), satisfying

$\sum_{a_1 a'_1} I_{a_1 a'_1} M^{L}_{a_1 a'_1, a_2 a'_2} = \eta^{L} I_{a_2 a'_2}, \qquad (5.13)$

$\sum_{a_2 a'_2} I_{a_2 a'_2} M^{R}_{a_1 a'_1, a_2 a'_2} = \eta^{R} I_{a_1 a'_1}, \qquad (5.14)$

with $\eta^{L}$ and $\eta^{R}$ the corresponding eigenvalues.
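Equations (5.11) and (5.13) are straightforward to code. The sketch below uses our own index convention (A carries indices (s, a1, a2)); it builds $M^L$ for a random pair (λ, A), tests whether the identity is a left eigenvector, and extracts the dominant left eigenvector, which is the object the canonicalization algorithm of the next subsection needs:

```python
import numpy as np

def left_transfer_matrix(lam, A):
    # M^L of Eq. (5.11); lam lives on the bond to the left of A
    LA = A * lam[None, :, None]                     # lam_{a1} A_{s, a1 a2}
    M = np.einsum('sij,skl->ikjl', LA, LA.conj())   # indices (a1, a1', a2, a2')
    chi = A.shape[1]
    return M.reshape(chi * chi, chi * chi)

rng = np.random.default_rng(3)
d, chi = 2, 3
A = rng.standard_normal((d, chi, chi))
lam = np.sort(rng.random(chi))[::-1]

ML = left_transfer_matrix(lam, A)

# Eq. (5.13): the MPS is canonical only if vec(I) is a left eigenvector of M^L
idvec = np.eye(chi).reshape(-1)
lhs = idvec @ ML
eta = (lhs @ idvec) / chi                  # would-be eigenvalue
print(np.allclose(lhs, eta * idvec))       # generally False for a random (lam, A)

# dominant left eigenvector of M^L (a right eigenvector of its transpose)
w, v = np.linalg.eig(ML.T)
vL = v[:, np.argmax(np.abs(w))].reshape(chi, chi)
```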
Similar eigenvalue equations can be obtained from the canonical conditions associated with the tensor B, where we have the transfer matrices

$N^{L}_{a_1 a'_1, a_2 a'_2} = \sum_{s} \tilde{\lambda}_{a_1} B_{s, a_1 a_2} \tilde{\lambda}_{a'_1} B^{*}_{s, a'_1 a'_2}, \qquad (5.15)$

$N^{R}_{a_1 a'_1, a_2 a'_2} = \sum_{s} B_{s, a_1 a_2} \lambda_{a_2} B^{*}_{s, a'_1 a'_2} \lambda_{a'_2}. \qquad (5.16)$

Now the canonical conditions are given by four eigenvalue equations and can be reinterpreted as follows: an infinite MPS formed by A, B, λ, and λ̃ is canonical when the identity is the eigenvector of its transfer matrices.

Simply from the canonical conditions, the identity is not required to be the dominant eigenvector. However, if the identity is not the dominant one, the canonical conditions become unstable under an arbitrarily small noise. Below, we will show that the canonicalization algorithm assures that the identity is the leading eigenvector, since it transforms the leading eigenvector into an identity. In addition, if the dominant eigenvector of $M^L$ and $M^R$ (also $N^L$ and $N^R$) is degenerate, the canonical form will not be unique. See Ref. [2] for more details.

5.1.3 Canonicalization Algorithm and Some Related Topics

Consider the iTEBD algorithm [3] (see Sect. 3.4). While the MPO represents a unitary operator, the canonical form of the MPS is preserved by the evolution (contraction). For imaginary-time evolution, the MPO is nearly unitary; for a Trotter step τ → 0, the MPO approaches an identity. It turns out that in this case, the MPS will be canonicalized by the evolution in the standard iTEBD algorithm. When the MPO is non-unitary, e.g., when contracting the TN of a 2D statistical model [2], the MPS will not be canonical, and canonicalization might be needed to better truncate the bond dimensions of the MPS.

Canonicalization Algorithm. An algorithm to canonicalize an arbitrary MPS was proposed by Orús and Vidal [2]. The idea is to compute the dominant eigenvectors of the transfer matrices and to introduce proper gauge transformations on the virtual bonds that map the leading eigenvector to the identity.

Let us take the gauge transformation on the virtual bond between A and B as an example. Firstly, compute the dominant left eigenvector $v^L$ of the matrix $N^L M^L$, and similarly the dominant right eigenvector $v^R$ of the matrix $N^R M^R$. Then reshape $v^L$ and $v^R$ into two matrices and decompose them symmetrically as

$v^{R}_{a_1 a'_1} = \sum_{a} X_{a_1 a} X^{*}_{a'_1 a}, \qquad (5.17)$

$v^{L}_{a_1 a'_1} = \sum_{a} Y_{a_1 a} Y^{*}_{a'_1 a}. \qquad (5.18)$

X and Y can be calculated using the eigenvalue decomposition, i.e., $v^{R} = W D W^{\dagger}$ with $X = W \sqrt{D}$, and similarly for Y.

Insert the identities $X^{-1} X$ and $Y Y^{-1}$ on the virtual bond, as shown in Fig. 5.4; then we get a new matrix $\tilde{M} = X \tilde{\lambda} Y$ on this bond. Apply the SVD, $\tilde{M} = U \tilde{\lambda}' V^{\dagger}$, where $\tilde{\lambda}'$ is the updated spectrum on this bond. Meanwhile, we obtain the gauge transformations to update A and B as $\tilde{U} = X^{-1} U$ and $\tilde{V} = V^{\dagger} Y^{-1}$, where the transformations are implemented as

$A_{s_1, a_1 a_2} \leftarrow \sum_{a} A_{s_1, a_1 a} \tilde{U}_{a a_2}, \qquad (5.19)$

$B_{s_1, a_1 a_2} \leftarrow \sum_{a} \tilde{V}_{a_1 a} B_{s_1, a a_2}. \qquad (5.20)$

Fig. 5.4 The illustration of the canonical transformations.

Implement the same steps given above on the virtual bond between B and A; then the MPS is transformed to the canonical form.
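A schematic numpy translation of Eqs. (5.17)–(5.20) as reconstructed above. The helper names are ours; vL and vR are assumed to be the reshaped, Hermitian positive-definite dominant eigenvectors obtained as in the previous snippet, and transpose/conjugation conventions may differ from Ref. [2]. Treat this as a sketch of the bookkeeping, not a reference implementation:

```python
import numpy as np

def symmetric_factor(v):
    # Eqs. (5.17)-(5.18): v = W D W^dagger gives X = W sqrt(D), so v = X X^dagger
    dvals, W = np.linalg.eigh(v)
    dvals = np.clip(dvals, 0.0, None)        # guard against tiny negative rounding
    return W * np.sqrt(dvals)

def update_bond(A, lam_t, B, vL, vR):
    # Gauge-fix the A-B bond carrying the spectrum lam_t (a 1D array);
    # tensor index order is assumed to be (s, a1, a2)
    X = symmetric_factor(vR)                 # v^R = X X^dagger
    Y = symmetric_factor(vL)                 # v^L = Y Y^dagger
    M = X @ np.diag(lam_t) @ Y               # bond matrix after inserting
                                             # X^{-1} X and Y Y^{-1}
    U, lam_new, Vh = np.linalg.svd(M)
    lam_new /= np.linalg.norm(lam_new)       # keep the updated spectrum normalized
    Ut = np.linalg.inv(X) @ U                # gauge transformation for A
    Vt = Vh @ np.linalg.inv(Y)               # gauge transformation for B
    A_new = np.einsum('sab,bc->sac', A, Ut)  # Eq. (5.19)
    B_new = np.einsum('ab,sbc->sac', Vt, B)  # Eq. (5.20)
    return A_new, lam_new, B_new
```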
Variants of the Canonical Form. From the canonical form of an MPS, one can define the left and right canonical forms. Define the following tensors:

$A^{L}_{s, a a'} = \lambda_{a} A_{s, a a'}, \qquad (5.21)$

$A^{R}_{s, a a'} = A_{s, a a'} \tilde{\lambda}_{a'}, \qquad (5.22)$

$B^{L}_{s, a a'} = \tilde{\lambda}_{a} B_{s, a a'}, \qquad (5.23)$

$B^{R}_{s, a a'} = B_{s, a a'} \lambda_{a'}, \qquad (5.24)$

$A^{M}_{s, a a'} = \lambda_{a} A_{s, a a'} \tilde{\lambda}_{a'}. \qquad (5.25)$

The left canonical MPS is defined by $A^L$ and $B^L$ as

$\Psi = \mathrm{tTr}\,(\cdots A^{L} B^{L} A^{L} B^{L} \cdots). \qquad (5.26)$

Similarly, the right canonical MPS is defined by $A^R$ and $B^R$ as

$\Psi = \mathrm{tTr}\,(\cdots A^{R} B^{R} A^{R} B^{R} \cdots). \qquad (5.27)$

The central orthogonal MPS is defined as

$\Psi = \mathrm{tTr}\,(\cdots A^{L} B^{L} A^{M} B^{R} A^{R} \cdots). \qquad (5.28)$
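The definitions (5.21)–(5.25) are just diagonal rescalings of the bond indexes. A minimal sketch (index order (s, a, a') assumed; for a genuinely canonical triple (λ, A, λ̃) the resulting $A^L$ and $A^R$ would be isometries):

```python
import numpy as np

rng = np.random.default_rng(4)
d, chi = 2, 3
A = rng.standard_normal((d, chi, chi))   # A_{s, a a'}
lam = rng.random(chi)                    # spectrum on the bond left of A
lam_t = rng.random(chi)                  # spectrum on the bond right of A

AL = lam[None, :, None] * A                          # Eq. (5.21)
AR = A * lam_t[None, None, :]                        # Eq. (5.22)
AM = lam[None, :, None] * A * lam_t[None, None, :]   # Eq. (5.25), central tensor

# isometry check, e.g. sum_{s,a} AL_{s,aa'} AL*_{s,aa''} = I for a canonical MPS
print(np.einsum('sab,sac->bc', AL, AL.conj()))       # identity only if canonical
```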
One can easily check that these MPSs and the canonical MPS can be transformed into each other by gauge transformations. From the canonical conditions, $A^L$, $A^R$, $B^L$, and $B^R$ are non-square orthogonal matrices (called isometries), e.g., $\sum_{s a} A^{L}_{s, a a'} A^{L*}_{s, a a''} = I_{a' a''}$. $A^M$ is called the central tensor of the central orthogonal MPS. This MPS form is the state ansatz behind the DMRG algorithm [4, 5] and is very useful in TN-based methods; see, for example, the works of McCulloch [6, 7]. For instance, when applying DMRG to solve a 1D quantum model, the tensors $A^L$ and $B^L$ define a left-to-right RG flow that optimally compresses the Hilbert space of the left part of the chain, and $A^R$ and $B^R$ define a right-to-left RG flow similarly. The central tensor between these two RG flows is in fact the ground state of the effective Hamiltonian given by the RG flows of DMRG. Note that the canonical MPS is also called the central canonical form, where the directions of the RG flows can be switched arbitrarily by gauge transformations; thus, there is no need to define the directions of the flows or a specific center.

Relations to Tensor-Train Decomposition. It is worth mentioning the TTD [8] proposed in the field of MLA. As argued in Chap. 2, one advantage of MPS is that it lowers the number of parameters from an exponential dependence on the system size to a polynomial one. Let us consider a similar problem for an N-th order tensor that has $d^N$ parameters: how to find its optimal MPS representation, where there are only $2 d \chi + (N-2) d \chi^{2}$ parameters (with χ the dimension cut-off). TTD was proposed for this aim, by decomposing a tensor into a tensor-train form that is similar to a finite open MPS; the number of parameters then depends only linearly on the order of the original tensor (a minimal sketch is given after Fig. 5.5 below). The TTD algorithm shares many ideas with MPS and the related algorithms, especially DMRG, which was proposed about two decades earlier. The aim of TTD is also similar to the truncation tasks in the TN algorithms, which is to compress the number of parameters.

5.2 Super-Orthogonalization and Tucker Decomposition

As discussed in the above section, the canonical form of an MPS brings a lot of advantages, such as determining the entanglement and the optimal truncations of the virtual bond dimensions by local transformations. The canonical form can be readily generalized to iPEPSs on trees. Can we also define the canonical form for iPEPSs on higher-dimensional regular lattices, such as the square lattice (Fig. 5.5)? If this could be done, we would know how to find the globally optimal transformations that reduce the bond dimensions of the iPEPS, just like what we can do with an MPS. Due to the complexity of 2D TNs, unfortunately, there is no such form in general. In the following, we explain the super-orthogonal form of iPEPS proposed in 2012 [9], which applies the canonical form of tree iPEPS to the iPEPS on regular lattices. The super-orthogonalization is a generalization of the Tucker decomposition, a higher-order generalization of the matrix SVD [10], providing a zero-loop approximation scheme [11] to define the entanglement and truncate the bond dimensions.

Fig. 5.5 The first two figures show the iPEPS on tree and square lattices with two-site translational invariance. The last one shows the super-orthogonal conditions.
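Returning to the TTD paragraph of Sect. 5.1.3, here is the promised minimal TT-SVD sketch in the spirit of [8]: our own simplified implementation with a hard rank cap chi_max, not the reference algorithm (which truncates to a prescribed accuracy):

```python
import numpy as np

def tt_svd(T, chi_max):
    # Decompose tensor T (shape (d_1, ..., d_N)) into TT/MPS cores by
    # sequential SVDs, keeping at most chi_max singular values per bond
    dims = T.shape
    cores, rank = [], 1
    mat = T.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        chi = min(chi_max, len(s))
        cores.append(U[:, :chi].reshape(rank, dims[k], chi))
        rank = chi
        mat = (s[:chi, None] * Vh[:chi]).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# usage: compress a random 2^8 tensor and measure the reconstruction error
T = np.random.default_rng(5).standard_normal((2,) * 8)
cores = tt_svd(T, chi_max=8)
approx = cores[0]
for c in cores[1:]:
    approx = np.tensordot(approx, c, axes=(-1, 0))
approx = approx.reshape(T.shape)
print(np.linalg.norm(approx - T) / np.linalg.norm(T))
```

A random tensor is nearly incompressible, so the printed error is sizeable here; structured data (e.g., smooth functions sampled on a grid) compresses far better under the same cap.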