17. Time Evolution
In this segment of the tutorial, we delve into some time evolution techniques for MPS. In particular, we will focus on the TDVP and time evolution MPO methods. Another method, (i)TEBD, has already been explained in an earlier section. Following this, we briefly explain how imaginary time evolution can be used to find ground states and how thermal density matrices can be simulated using MPS. At the end of this section we offer some basic code examples.
In the case of quantum many-body systems, time evolution amounts to solving the time-dependent Schrödinger equation (with \(\hbar = 1\))

$$ i\frac{d}{dt}\ket{\Psi(t)} = \hat{H}\ket{\Psi(t)} $$

for a given Hamiltonian \(\hat{H}\) with initial condition \(\ket{\Psi_0}=\ket{\Psi(t_0)}\). For a time-independent Hamiltonian the solution is given by

$$ \ket{\Psi(t)} = e^{-i\hat{H}(t-t_0)}\ket{\Psi_0} \equiv U(t-t_0)\ket{\Psi_0}. $$
By approximating the time evolution operator \(U\) in the Tensor Network language, we can also study real-time dynamics.
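To make these formulas concrete, below is a small plain-Julia example (independent of any tensor network machinery; the chain length, couplings and evolution time are arbitrary choices, and the sign conventions need not match those of MPSKitModels) that builds a transverse-field Ising Hamiltonian as a dense matrix and applies the exact propagator \(U = e^{-i\hat{H}t}\) to a product state.

```julia
using LinearAlgebra

# Pauli matrices and the 2x2 identity
σx = [0.0 1.0; 1.0 0.0]
σz = [1.0 0.0; 0.0 -1.0]
id2 = Matrix{Float64}(I, 2, 2)

# place a single-site operator `op` on site `i` of an N-site chain
site_op(op, i, N) = kron([j == i ? op : id2 for j in 1:N]...)

# dense transverse-field Ising Hamiltonian (only feasible for small N)
function ising(N; J=1.0, g=0.5)
    H = zeros(2^N, 2^N)
    for i in 1:N-1
        H -= J * site_op(σz, i, N) * site_op(σz, i + 1, N)
    end
    for i in 1:N
        H -= g * site_op(σx, i, N)
    end
    return H
end

N = 6
H = ising(N)
ψ0 = zeros(ComplexF64, 2^N); ψ0[1] = 1.0   # the product state |↑↑...↑⟩

t = 1.0
U = exp(-im * t * H)                        # exact propagator U = exp(-iHt)
ψt = U * ψ0

# ⟨σᶻ⟩ on the first site, before and after the evolution
@show real(ψ0' * site_op(σz, 1, N) * ψ0)
@show real(ψt' * site_op(σz, 1, N) * ψt)
```

For any MPS-based method, the role of the approximations discussed below is precisely to avoid ever forming the exponentially large matrix \(U\).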
Note
One should keep in mind that time evolution will in general increase the entanglement of the state, so that in practice time evolution with Tensor Networks is only feasible for relatively modest times. For example, in the case of a quench the entanglement entropy of 1D systems grows as \(S \sim t\) (see [Calabrese and Cardy, 2005]), so that the bond dimension has to grow as \(D \sim \exp(at)\) in order to accurately follow the dynamics.
17.1. TDVP
The Time-Dependent Variational Principle (TDVP) is an old concept, originally developed by Dirac and Frenkel in the 1930s. The idea is to solve the Schrödinger equation by minimizing

$$ \left\lVert\, i\frac{d}{dt}\ket{\Psi(t)} - \hat{H}\ket{\Psi(t)} \,\right\rVert^2 $$

over the allowed variations of the state within the variational class.
In the case of MPS, we can parametrize the state \(\ket{\Psi(t)}\) by a set of time-dependent matrices \(\{A_1(t),A_2(t),\dots,A_N(t)\}\) (where \(N\) is the system size for a finite MPS or the size of the unit cell for an infinite MPS). In other words, the state \(\ket{\Psi(t)}\) lives in a manifold determined by these matrices, the MPS manifold. Geometrically, the solution of the minimization problem is given by the projection of the RHS of the Schrödinger equation onto the MPS manifold,

$$ \frac{d}{dt}\ket{\Psi(A)} = -i\,\hat{P}_{T\ket{\Psi(A)}}\hat{H}\ket{\Psi(A)} , $$
where \(\hat{P}_{T\ket{\Psi(A)}}\) is the operator that projects onto the tangent space of the manifold. As a consequence the time-evolving state never leaves the MPS manifold, and the parametrization in terms of \(A(t)\) remains sensible. One can in principle work out this equation at the level of the \(A\) matrices and try to solve it directly. This gives a complicated set of (non-linear) equations that can be solved by one's favourite finite difference scheme, but it requires the inversion of matrices with small singular values (and thus leads to numerical instabilities) [Haegeman et al., 2011], [Haegeman et al., 2016]. Instead, it turns out that a natural and inversion-free way of solving this equation is possible if we use the gauge freedom of MPS.
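The geometric picture can be illustrated without any MPS machinery. In the toy sketch below (plain Julia; the dimensions, Hamiltonian and subspace are arbitrary random choices), the "manifold" is simply a fixed linear subspace: integrating the projected equation keeps the state exactly inside the subspace, whereas the exact evolution immediately leaves it.

```julia
using LinearAlgebra

n, k = 20, 4
A = randn(n, n) + im * randn(n, n)
H = (A + A') / 2                                 # random Hermitian "Hamiltonian"

# orthonormal basis of a random k-dimensional "manifold" (here: a linear subspace)
Q = Matrix(qr(randn(n, k) + im * randn(n, k)).Q)
P = Q * Q'                                       # projector onto the subspace

ψ0 = Q * normalize(randn(k) + im * randn(k))     # initial state inside the subspace

t = 2.0
ψ_proj  = exp(-im * t * P * H * P) * ψ0          # projected (TDVP-like) evolution
ψ_exact = exp(-im * t * H) * ψ0                  # full Schrödinger evolution

# distance to the subspace: (numerically) zero for the projected flow only
@show norm(ψ_proj - P * ψ_proj)
@show norm(ψ_exact - P * ψ_exact)
```

For an actual MPS the manifold is curved, which is why the projector \(\hat{P}_{T\ket{\Psi(A)}}\) depends on the current state.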
For a finite MPS, one can show that in the mixed gauge the action of the projector on \(\hat{H}\ket{\Psi(A)}\) is given by [Vanderstraeten et al., 2019]

The projector action consists of two sums: one where an effective Hamiltonian \(\hat{H}_{\text{eff}}^{A_C}\) acts on the \(A_C\) on site \(n\), and one where \(\hat{H}_{\text{eff}}^{C}\) acts on the bond tensor \(C\) to the right of it. The effective Hamiltonians are given by
Thanks to this decomposition of \(\hat{P}_{T\ket{\Psi(A)}} \hat{H}\ket{\Psi(A)}\), (17.2) now resembles an ODE of the form

$$ \frac{d}{dt} Y(t) = A(Y(t)) + B(Y(t)) . $$
This type of ODE can be solved by a splitting method [Lubich et al., 2015], i.e. we solve \(\frac{d}{dt} Y = A(Y)\) and \(\frac{d}{dt} Y = B(Y)\) separately and then combine the two results to obtain an approximate solution of (17.3). Applying this idea to (17.2), we thus need to solve equations of the form
and
These can be further simplified by noting that we can put all the time dependence inside one tensor, which we choose to be either \(A_C(n)\) or \(C(n)\). It is then sufficient to solve

$$ \frac{d}{dt}A_C(n,t) = -i\,\hat{H}_{\text{eff}}^{A_C}(n)\big[A_C(n,t)\big] $$

and

$$ \frac{d}{dt}C(n,t) = +i\,\hat{H}_{\text{eff}}^{C}(n)\big[C(n,t)\big] $$

for each site \(n\) separately. These can be integrated exactly to give

$$ A_C(n,t+dt) = \exp\!\big(-i\,dt\,\hat{H}_{\text{eff}}^{A_C}(n)\big)\,A_C(n,t) $$

and

$$ C(n,t+dt) = \exp\!\big(+i\,dt\,\hat{H}_{\text{eff}}^{C}(n)\big)\,C(n,t) . $$
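In an actual implementation one never builds \(\hat{H}_{\text{eff}}\) as a dense matrix; the local exponentials above are applied iteratively, using only the action of \(\hat{H}_{\text{eff}}\) on a tensor. Below is a minimal sketch of this step using `exponentiate` from KrylovKit (the Krylov library that MPSKit builds on), with a random Hermitian matrix standing in for the effective Hamiltonian and arbitrarily chosen dimensions.

```julia
using KrylovKit, LinearAlgebra

D, d = 10, 2        # virtual and physical dimensions of the toy A_C tensor
dt = 0.05

# random Hermitian matrix standing in for H_eff^{A_C}, acting on the vectorized A_C
M = randn(ComplexF64, D * d * D, D * d * D)
Heff = (M + M') / 2

AC = randn(ComplexF64, D * d * D)   # vectorized centre-site tensor

# apply exp(-i dt H_eff) to AC without forming the dense matrix exponential
AC_new, info = exponentiate(x -> Heff * x, -im * dt, AC; ishermitian = true)

# compare against the dense exponential (only feasible at this toy size)
@show norm(AC_new - exp(-im * dt * Heff) * AC)
```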
A natural way to combine the separate solutions is to perform a sweep-like update. Starting from the first site, we do the following:

TDVP algorithm for finite MPS

1. Evolve \(A_C(n)\) forward in time over a step \(dt\): \(A_C(n) \leftarrow \exp\!\big(-i\,dt\,\hat{H}_{\text{eff}}^{A_C}(n)\big)\,A_C(n)\).
2. Split the updated tensor as \(A_C(n) = A_L(n)\,C(n)\) (e.g. via a QR decomposition).
3. Evolve the bond tensor backward in time: \(C(n) \leftarrow \exp\!\big(+i\,dt\,\hat{H}_{\text{eff}}^{C}(n)\big)\,C(n)\).
4. Absorb the bond tensor into the next site, \(A_C(n+1) = C(n)\,A_R(n+1)\), and repeat from step 1 for site \(n+1\).

At the end of the chain one only updates the \(A_C\), since there is no \(C\) tensor to its right.
Doing the above left-to-right sweep gives a first-order integrator, i.e. the error made in a single time step is of order \(\mathcal{O}(dt^2)\). Since the terms can be solved in any order, we can also perform a reverse sweep, i.e. working from right to left. Combining this with the left-to-right sweep yields a second-order integrator (because the reverse sweep is the adjoint of the forward sweep).
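These orders of accuracy can be checked on a toy linear problem without any MPS machinery. In the sketch below (plain Julia, with random non-commuting matrices standing in for the two parts of the flow), the single-step error of the plain splitting scales as \(\mathcal{O}(dt^2)\), while that of the symmetric "sweep plus reverse sweep" composition scales as \(\mathcal{O}(dt^3)\).

```julia
using LinearAlgebra

n = 8
A, B = randn(n, n), randn(n, n)    # two generic (non-commuting) generators

lie(dt)    = exp(dt * B) * exp(dt * A)                        # single sweep
strang(dt) = exp(dt / 2 * A) * exp(dt * B) * exp(dt / 2 * A)  # sweep + reverse sweep

err(method, dt) = opnorm(method(dt) - exp(dt * (A + B)))      # single-step error

for dt in (0.1, 0.05, 0.025)
    println("dt = $dt:  Lie error = $(err(lie, dt)),  symmetric error = $(err(strang, dt))")
end
```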
For an infinite MPS one could also perform a sweep-like update until some convergence criterion is met, to obtain new tensors \(\{A_L,C,A_C,A_R\}\). However, this can be costly since one has to iterate until convergence. Instead we can exploit the translational invariance of the system by demanding that \(C=\tilde{C}\). Since \(\tilde{C}=\exp(i\,dt\, \hat{H}_{\text{eff}}^{C})\, C(n,t+dt)\), we can turn things around and find
Given the newly found \(C(n,t+dt)\) and \(A_C(n,t+dt)\), one can determine a new \(A_L\), giving an MPS at time \(t+dt\).
Note
Unlike other time evolution methods, TDVP retains some of the physical properties of the Schrödinger equation it is trying to solve. First of all, it acts trivially on (numerical) eigenstates of \(\hat{H}\): in that case \(\hat{H}_{\text{eff}}^{A_C}[A_C] \propto A_C\), and the whole MPS simply picks up a phase \(e^{-i E\, dt}\). In addition, it conserves energy and is time-reversible for time-independent Hamiltonians [Vanderstraeten et al., 2019].
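The first of these properties is easy to check with MPSKit itself, reusing only the calls that also appear in the code example further below (so the same API and version assumptions apply): evolving a converged ground state under the same Hamiltonian should leave expectation values unchanged, the state merely acquiring a global phase.

```julia
using TensorKit,MPSKit,MPSKitModels

H = transverse_field_ising(;J=1.0,g=0.25);
Ψ = InfiniteMPS([2],[10]);
(gs,envs) = find_groundstate(Ψ,H,VUMPS(;verbose=false));

# evolve the (numerical) eigenstate with the same Hamiltonian
dt = 0.1
(gs_t,envs) = timestep(gs,H,dt,TDVP(),envs);

# expectation values are untouched: TDVP acts trivially on an eigenstate
@show real(expectation_value(gs,H)[1]) - real(expectation_value(gs_t,H)[1]);
@show real(expectation_value(gs,σᶻ())[1]) - real(expectation_value(gs_t,σᶻ())[1]);
```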
17.2. Time Evolution MPO
Perhaps the most natural way to perform time evolution is to write the time evolution operator as an MPO. The evolved state is then simply the contraction of this MPO with the MPS. The exponential implementing the time evolution can be approximated up to any order by its truncated Taylor series,

$$ U(\tau) = e^{-\tau\hat{H}} = 1 - \tau\hat{H} + \frac{\tau^2}{2}\hat{H}^2 - \dots $$
The MPO approximation of the time evolution operator then boils down to implementing powers of \(\hat{H}\) in an efficient (i.e. with the lowest possible MPO bond dimension) and size-extensive way [Damme et al., 2023]. For example, consider an MPO Hamiltonian of the form
which corresponds to the Hamiltonian
The first order approximation of \(U=\exp(-\tau\hat{H})\) is given in MPO form by
Doing the matrix multiplication (and remembering that the MPO boundary conditions are such that we need to track the upper-left entry of the product) we find
as desired.
The trick for generating the first order approximation involves removing the third “level” from the MPO form of \(\hat{H}\) and multiplying by the appropriate factor of \(\tau\). This can be visualised as follows
This method can be extended to any desired order in \(\tau\) as outlined in [Damme et al., 2023].
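As a quick sanity check of the truncation itself (independent of the MPO structure; the matrix and values of \(\tau\) below are arbitrary), one can compare truncated Taylor series with the exact exponential of a small Hermitian matrix: the error of the order-\(k\) truncation shrinks as \(\tau^{k+1}\).

```julia
using LinearAlgebra

m = 6
M = randn(m, m); H = (M + M') / 2        # small random Hermitian matrix

# order-k truncated Taylor series of exp(-τH)
taylor(τ, k) = sum((-τ * H)^j / factorial(j) for j in 0:k)

for τ in (0.1, 0.05)
    e1 = opnorm(exp(-τ * H) - taylor(τ, 1))
    e2 = opnorm(exp(-τ * H) - taylor(τ, 2))
    println("τ = $τ:  first-order error = $e1,  second-order error = $e2")
end
```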
17.3. Imaginary time evolution
Besides simulating dynamics, any time evolution method can also be used to find the ground state of \(\hat{H}\) by taking \(t\) to be imaginary, \(t=-i\tau\). The basis for this idea is the fact that

$$ \ket{E_0} \propto \lim_{\tau\to\infty} e^{-\tau \hat{H}} \ket{\Psi} , $$
where \(\ket{\Psi}\) is any initial state not orthogonal to the ground state. Indeed, expanding the initial state in the eigenbasis of \(\hat{H}\) we have \(\ket{\Psi} = \sum_i c_i \ket{E_i}\), with \(\ket{E_i}\) the eigenstate corresponding to energy \(E_i\) and ordering \(E_0 < E_1 < E_2 < \dots\). Then

$$ e^{-\tau\hat{H}}\ket{\Psi} = \sum_i c_i e^{-\tau E_i}\ket{E_i} = e^{-\tau E_0}\Big(c_0\ket{E_0} + \sum_{i>0} c_i\, e^{-\tau(E_i-E_0)}\ket{E_i}\Big). $$
When taking the limit \(\tau\to+\infty\), the slowest-vanishing exponential is the one associated with \(E_0\). In this way the ground state gets projected out of the initial state. Demanding that the state is normalized gives

$$ \lim_{\tau\to\infty} \frac{e^{-\tau\hat{H}}\ket{\Psi}}{\left\lVert e^{-\tau\hat{H}}\ket{\Psi}\right\rVert} = \frac{c_0}{|c_0|}\ket{E_0} , $$

which gives the ground state up to an irrelevant phase factor.
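This projection argument can be verified directly on a small matrix (plain Julia, no MPS involved; the matrix size and the values of \(\tau\) are arbitrary choices): applying \(e^{-\tau\hat{H}}\) to a random state and normalizing drives the energy towards the exact ground-state energy.

```julia
using LinearAlgebra

n = 20
M = randn(n, n); H = (M + M') / 2          # random Hermitian "Hamiltonian"
E0 = minimum(eigvals(H))                   # exact ground-state energy

ψ0 = normalize(randn(n) + im * randn(n))   # random initial state

for τ in (1.0, 5.0, 10.0, 20.0)
    ψτ = normalize(exp(-τ * H) * ψ0)       # imaginary-time evolved, normalized state
    println("τ = $τ:  ⟨H⟩ = $(real(ψτ' * H * ψτ))   (exact E₀ = $E0)")
end
```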
17.4. Finite Temperature
It is possible to use time evolution methods to construct thermal density operators, i.e. \(\rho = \frac{1}{Z}e^{-\beta \hat{H}}\) with \(\beta=1/T\) and \(Z\) a normalization constant. The idea here is to write \(\rho\) as an MPO
with the constraint that
Here the triangles represent an isometry that fuses the two legs together into one bigger leg. This particular form ensures that \(\rho\) is a positive semi-definite operator and thus physical [Verstraete et al., 2004]. Note that for \(d_k=1\) we obtain the density matrix of a pure state. We can represent \(\rho\) as the density matrix of a pure state (i.e. an MPS) by introducing ancillas \(\{\ket{a_k}\}\) so that
where the thicker physical legs indicate that they contain both the \(s\) and \(a\) degrees of freedom. One immediately sees that \(\rho=\text{Tr}_a({\ket{\Psi}\bra{\Psi}})\). The thermal density operators \(\rho(\beta)\) for any \(\beta\) can then be found by starting from the \(\beta=0\) state \(\rho(0)=\mathbf{1}\) and performing imaginary time evolution
with \(\Delta \tau = \frac{\beta}{2M}\) [Verstraete et al., 2004].
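The purification construction can be verified exactly for a single small system (plain Julia, no tensor networks; the dimension and \(\beta\) are arbitrary choices): starting from the maximally entangled state of the system with an identical ancilla copy, which is the \(\beta=0\) purification, applying \(e^{-\beta\hat{H}/2}\) to the system legs only and tracing out the ancilla reproduces the Gibbs state.

```julia
using LinearAlgebra

d = 8
M = randn(d, d); H = (M + M') / 2          # small random Hermitian "Hamiltonian"
β = 1.3

# |Ψ(0)⟩ ∝ Σₛ |s⟩_sys ⊗ |s⟩_anc : maximally entangled (infinite-temperature) state
Ψ0 = vec(Matrix{ComplexF64}(I, d, d)) / sqrt(d)

# apply e^{-βH/2} to the system index only; the system is the fast index here,
# so it is the *second* factor in Julia's kron convention
Ψβ = normalize(kron(Matrix(1.0I, d, d), exp(-β / 2 * H)) * Ψ0)

# trace out the ancilla and compare with the Gibbs state e^{-βH}/Z
ρ = [sum(Ψβ[(a - 1) * d + s] * conj(Ψβ[(a - 1) * d + s′]) for a in 1:d)
     for s in 1:d, s′ in 1:d]
ρ_exact = exp(-β * H) / tr(exp(-β * H))
@show norm(ρ - ρ_exact)
```

In the MPS setting the same evolution \(e^{-\Delta\tau\hat{H}}\) is applied \(M\) times with any of the time evolution methods above, acting only on the physical (system) legs of the purification.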
17.5. Code example: MPSKit.timestep, make_time_mpo
Below is some code showing how MPSKit and MPSKitModels can be used out of the box to perform time evolution.
```julia
using TensorKit,MPSKit,MPSKitModels
using Plots

H₀ = transverse_field_ising(;J=1.0,g=0.0);

# Create a random MPS with physical bond dimension d=2 and virtual D=10 and optimize it
Ψ = InfiniteMPS([2],[10]);
(gs,envs) = find_groundstate(Ψ,H₀,VUMPS(;verbose=false));

# Let's check some expectation values
sz_gs = expectation_value(gs,σᶻ()) # we have found the |↑↑...↑> or |↓↓...↓> state
E_gs = expectation_value(gs,H₀,envs)

# time evolution Hamiltonian
Ht = transverse_field_ising(;J=1.0,g=0.25);
Ebefore = real(expectation_value(gs,Ht)[1])

dt = 0.1

# Let's do one time step with TDVP
alg = TDVP();
envs = environments(gs,Ht);
(Ψt,envs) = timestep(gs,Ht,dt,alg,envs);
szt_tdvp = real(expectation_value(Ψt,σᶻ())[1]);
Et_tdvp = real(expectation_value(Ψt,Ht,envs)[1]);

# let's make a first order time evolution mpo out of Ht
Ht_mpo = make_time_mpo(Ht, dt, TaylorCluster{1}());
(Ψt,_) = approximate(gs, (Ht_mpo, gs), VUMPS(; verbose=false));
szt_tmpo = real(expectation_value(Ψt,σᶻ())[1]);
Et_tmpo = real(expectation_value(Ψt,Ht)[1]);

@show szt_tdvp-szt_tmpo
@show Ebefore-Et_tmpo
@show Ebefore-Et_tdvp;
```

```
[ Info: calcrw failed to converge 1.2699258727778001e-12
szt_tdvp - szt_tmpo = -2.0274126572794593e-5
Ebefore - Et_tmpo = 1.2840946585157553e-5
Ebefore - Et_tdvp = 1.1929500494112233e-6
```
We see that \(\langle σᶻ(t)\rangle\) after one time step is reasonably close for both methods, but that the energy (density) is conserved noticeably better by the TDVP method. If we were to do the time evolution for many time steps and for different orders of the time evolution MPO, we would end up with the following plot

We clearly see that increasing the order of the time evolution MPO improves the result.
```julia
# We can also find the groundstate using imaginary time evolution
H = transverse_field_ising(;J=1.0,g=0.35);

# Here we will do 1 iteration and see that the energy has dropped
Ψ = InfiniteMPS([2],[10]);
Ψenv = environments(Ψ,H);
Ebefore = real(expectation_value(Ψ,H,Ψenv)[1]);
(Ψ,Ψenv) = timestep(Ψ,H,-1im*dt,TDVP(),Ψenv);
Eafter = real(expectation_value(Ψ,H,Ψenv)[1]);
Eafter < Ebefore
```

```
true
```
If we were to do this for many iterations, the energy of the evolved state would eventually reach that of the ground state, as shown below.
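A rough sketch of such an iteration, reusing only the calls from the cells above (the number of steps and the step size are arbitrary choices, and no proper convergence check is performed):

```julia
using TensorKit,MPSKit,MPSKitModels

H = transverse_field_ising(;J=1.0,g=0.35);
Ψ = InfiniteMPS([2],[10]);
Ψenv = environments(Ψ,H);

dt = 0.1
for i in 1:100
    # `global` is only required when running this snippet as a plain script
    global Ψ, Ψenv
    (Ψ,Ψenv) = timestep(Ψ,H,-1im*dt,TDVP(),Ψenv)
end
E_imag = real(expectation_value(Ψ,H,Ψenv)[1])

# compare with a direct variational ground-state search
(gs,gsenvs) = find_groundstate(Ψ,H,VUMPS(;verbose=false));
E_gs = real(expectation_value(gs,H,gsenvs)[1])
@show E_imag - E_gs;
```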