17. Time Evolution#

In this segment of the tutorial, we delve into some time evolution techniques for MPS. In particular, we focus on the TDVP and Time Evolution MPO methods; another method, (i)TEBD, has already been explained in an earlier section. Following this we briefly explain how imaginary time evolution can be used to find ground states and how thermal density matrices can be simulated using MPS. At the end of this section we offer some basic code examples.

In the case of quantum many-body systems, time evolution amounts to solving the time-dependent Schrödinger equation

(17.1)#\[i \frac{\partial}{\partial t} \ket{\Psi(t)}= \hat{H}\ket{\Psi(t)}\]

for a given Hamiltonian \(\hat{H}\) with initial condition \(\ket{\Psi_0}=\ket{\Psi(t_0)}\). For a time-independent Hamiltonian the solution is given by

\[\ket{\Psi(t)} = U(t)\ket{\Psi_0} = \exp(-it\hat{H})\ket{\Psi_0}\]

By approximating the time evolution operator \(U\) in the Tensor Network language, we can also study real-time dynamics.
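For a small system this can be done exactly by exponentiating a dense Hamiltonian matrix. The following standalone sketch (with an arbitrary two-site transverse-field Ising Hamiltonian built from Pauli matrices, purely for illustration) shows what the tensor network methods below are approximating:

using LinearAlgebra

# Pauli matrices and an (illustrative) two-site transverse-field Ising Hamiltonian
σˣ = [0 1; 1 0]; σᶻ = [1 0; 0 -1]; id = Matrix{Float64}(I, 2, 2)
H_toy = -kron(σᶻ, σᶻ) - 0.5 * (kron(σˣ, id) + kron(id, σˣ))

ψ₀ = normalize(randn(ComplexF64, 4))    # some initial state
t  = 1.0
ψt = exp(-im * t * H_toy) * ψ₀          # |Ψ(t)⟩ = exp(-itH)|Ψ₀⟩

norm(ψt) ≈ 1.0                          # the evolution is unitary, so the norm is preserved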

Note

One should keep in mind that time evolution in general increases the entanglement of the state, so that in practice time evolution can only be performed with Tensor Networks for relatively modest times. For example, in the case of a quench the entanglement of 1D systems grows as \(S \sim t\) (see [Calabrese and Cardy, 2005]), so that the bond dimension must grow as \(D \sim \exp(at)\) in order to accurately follow the dynamics.

17.1. TDVP#

The Time-Dependent Variational Principle is an old concept, originally developed by Dirac and Frenkel in the 1930s. The idea is to solve the Schrödinger equation by minimizing

\[\|i \frac{\partial}{\partial t} \ket{\Psi(t)}- \hat{H}\ket{\Psi(t)}\|^2\]

In the case of MPS, we can parametrize the state \(\ket{\Psi(t)}\) by a set of time-dependent matrices \(\{A_1(t),A_2(t),\dots,A_N(t)\}\) (where \(N\) is the system size for a finite MPS or the size of the unit cell for an infinite MPS). In other words, the state \(\ket{\Psi(t)}\) lives in a manifold determined by these matrices, the MPS manifold. Geometrically, the solution of the minimization problem is given by the projection of the RHS of the Schrödinger equation onto the MPS manifold

(17.2)#\[\frac{d}{dt} \ket{\Psi(A)} = -i \hat{P}_{T\ket{\Psi(A)}} \hat{H}\ket{\Psi(A)}\]

where \(\hat{P}_{T\ket{\Psi(A)}}\) is the operator that projects the state onto the tangent space. As a consequence the time-evolving state never leaves the MPS manifold, and the parametrization in terms of \(A(t)\) makes sense. One can in principle work out the above equation on the level of the \(A\) matrices and try to solve it directly. This gives a complicated set of (non-linear) equations that can be solved with one's favourite finite difference scheme, but it requires the inversion of matrices with small singular values (and thus suffers from numerical instabilities) [Haegeman et al., 2011], [Haegeman et al., 2016]. Instead, it turns out that a natural and inversion-free way of solving this equation is possible if we use the gauge freedom of MPS.

For a finite MPS, one can show that in the mixed gauge the action of the projection operator on \(\hat{H}\ket{\Psi(A)}\) is given by [Vanderstraeten et al., 2019]

../_images/TDVPProjector.svg

The projector action consists of two sums: one where an effective Hamiltonian \(\hat{H}_{\text{eff}}^{A_C}\) acts on the \(A_C\) on site \(n\), and one where \(\hat{H}_{\text{eff}}^{C}\) acts on the bond tensor \(C\) to the right of it. The effective Hamiltonians are given by

../_images/Heffs.svg

Thanks to this decomposition of \(\hat{P}_{T\ket{\Psi(A)}} \hat{H}\ket{\Psi(A)}\), (17.2) now resembles an ODE of the form

(17.3)#\[\frac{d}{dt} Y = A(Y) + B(Y)\]

This type of ODE can be solved by a splitting method [Lubich et al., 2015], i.e. we solve \(\frac{d}{dt} Y = A(Y)\) and \(\frac{d}{dt} Y = B(Y)\) separately and then combine the two results to obtain an approximate solution to (17.3). Applying this idea to (17.2), we thus need to solve equations of the form

\[\frac{d}{dt} \ket{\Psi(A)} = -i \hat{H}_{\text{eff}}^{A_C}[ A_C(n)]\]

and

\[\frac{d}{dt} \ket{\Psi(A)}= i \hat{H}_{\text{eff}}^{C}[ C(n)]\]

These can be further simplified by noting that we can put all the time dependence inside one tensor, which we choose to be either \(A_C(n)\) or \(C(n)\). It is then sufficient to solve

(17.4)#\[\dot{A}_C(n) = -i \hat{H}_{\text{eff}}^{A_C}[ A_C(n)]\]

and

(17.5)#\[\dot{C}(n) = i \hat{H}_{\text{eff}}^{C}[ C(n)]\]

for each site \(n\) separately. These can be integrated exactly to give

(17.6)#\[A_C(n,t+dt) = \exp(-idt \hat{H}_{\text{eff}}^{A_C}) A_C(n,t)\]

and

(17.7)#\[C(n,t+dt) = \exp(idt \hat{H}_{\text{eff}}^{C}) C(n,t)\]
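As a toy illustration of such a splitting scheme (a dense two-dimensional linear ODE, nothing MPS-specific; the matrices A and B below are arbitrary choices), a single Lie-splitting step agrees with the exact solution up to an error of order \(dt^2\):

using LinearAlgebra

# toy linear ODE dY/dt = (A + B) Y, split into its two parts
A  = [0.0 1.0; -1.0 0.0]
B  = [-0.1 0.0; 0.0 -0.2]
Y₀ = [1.0, 0.0]
dt = 0.01

# one first-order (Lie) splitting step: evolve with A over dt, then with B over dt
Y_split = exp(dt * B) * (exp(dt * A) * Y₀)
Y_exact = exp(dt * (A + B)) * Y₀

norm(Y_split - Y_exact)   # difference is of order dt²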

A natural way to combine the separate solutions is to perform a sweep-like update. Starting from the first site we do:

TDVP algorithm for finite MPS

  1. Update \(A_C(n)\) according to (17.6).

  2. QR-decompose the resulting new \(A_C(n,t+dt)\) to obtain an updated \(A_L(n,t+dt)\) and \(C(n,t+dt)\).

  3. Update the new \(C(n,t+dt)\) via (17.7) to get an \(\tilde{C}\).

  4. Absorb \(\tilde{C}\) into \(A_R(n+1,t)\) to get a new \(A_C(n+1,t)\).

  5. Repeat for site \(n+1\).

At the end of the chain one only updates the \(A_C\) since there is no \(C\) there.
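Schematically, one left-to-right sweep can be sketched as follows. This is only a structural sketch on a hypothetical plain-array MPS container ψ (with fields AL, AC and AR holding rank-3 tensors) and hypothetical helpers apply_Heff_AC and apply_Heff_C implementing the effective-Hamiltonian contractions drawn above; in practice MPSKit's timestep with the TDVP() algorithm performs this update for you. The local exponentials (17.6) and (17.7) are applied with exponentiate from KrylovKit, which only needs the action of the effective Hamiltonian.

using LinearAlgebra   # qr
using KrylovKit       # exponentiate: applies exp(t*f) to a vector without building the matrix

# Structural sketch of one left-to-right TDVP sweep over a finite MPS.
# ψ, apply_Heff_AC and apply_Heff_C are hypothetical; tensors are assumed complex-valued.
function tdvp_sweep!(ψ, dt, apply_Heff_AC, apply_Heff_C)
    N = length(ψ.AC)
    for n in 1:N
        # 1. evolve A_C(n) forward over dt, Eq. (17.6)
        ψ.AC[n], _ = exponentiate(x -> apply_Heff_AC(ψ, n, x), -im * dt, ψ.AC[n])
        n == N && break                       # last site: only A_C is updated

        # 2. QR-split the evolved A_C(n) into an isometric A_L(n) and the bond tensor C(n)
        Dl, d, Dr = size(ψ.AC[n])
        Q, C = qr(reshape(ψ.AC[n], Dl * d, Dr))
        ψ.AL[n] = reshape(Matrix(Q), Dl, d, Dr)

        # 3. evolve C(n) backward over dt, Eq. (17.7)
        C_new, _ = exponentiate(x -> apply_Heff_C(ψ, n, x), +im * dt, Matrix(C))

        # 4. absorb the evolved bond tensor into A_R(n+1) to obtain the next centre tensor A_C(n+1)
        Dl2, d2, Dr2 = size(ψ.AR[n + 1])
        ψ.AC[n + 1] = reshape(C_new * reshape(ψ.AR[n + 1], Dl2, d2 * Dr2), Dl2, d2, Dr2)
    end
    return ψ
end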

Doing the above left-to-right sweep gives a first-order integrator, i.e. the error per time step is of order \(\mathcal{O}(dt^2)\). Since the terms can be solved in any order, we can also perform a reverse sweep, i.e. working from right to left. Combining this with the left-to-right sweep yields a second-order integrator (because the reverse sweep is the adjoint of the forward sweep).
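In terms of the sketch above, a second-order step would schematically combine the two sweeps (tdvp_sweep_reverse! denotes a hypothetical right-to-left analogue of the sweep sketched above; MPSKit's TDVP() takes care of this combination internally):

# symmetric (second-order) step: a forward half-step followed by its adjoint, the reverse half-step
tdvp_sweep!(ψ, dt / 2, apply_Heff_AC, apply_Heff_C)          # sweep left → right over dt/2
tdvp_sweep_reverse!(ψ, dt / 2, apply_Heff_AC, apply_Heff_C)  # sweep right → left over dt/2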

For an infinite MPS one could also perform a sweep-like update until some convergence criterion is met, to obtain new tensors \(\{A_L,C,A_C,A_R\}\). However, this can be costly since one has to iterate until convergence. Instead we can exploit the translational invariance of the system by demanding that \(C=\tilde{C}\). Since \(\tilde{C}=\exp(idt \hat{H}_{\text{eff}}^{C}) C(n,t+dt)\), we can turn things around and find

\[C(n,t+dt)=\exp(-idt \hat{H}_{\text{eff}}^{C}) C(n)\]

Given the newly found \(C(n,t+dt)\) and \(A_C(n,t+dt)\) one can determine a new \(A_L\), giving an MPS at time \(t+dt\).
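The last step, obtaining a new \(A_L\) from \(A_C(n,t+dt)\) and \(C(n,t+dt)\) without inverting \(C\), can for instance be done with polar decompositions. Below is a standalone sketch on plain arrays (the function name and array layout are just for illustration; MPSKit handles this internally):

using LinearAlgebra

# extract a left-isometric A_L from A_C and C by minimising ‖A_C - A_L C‖ without inverting C,
# using the unitary (polar) factors obtained from two SVDs
function regauge_left(AC, C)
    Dl, d, Dr = size(AC)
    F_AC = svd(reshape(AC, Dl * d, Dr))
    F_C  = svd(C)
    AL   = (F_AC.U * F_AC.Vt) * (F_C.U * F_C.Vt)'
    return reshape(AL, Dl, d, Dr)
end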

Note

Unlike other time evolution methods, TDVP retains some of the physical properties of the Schrödinger equation it is trying to solve. First of all, it acts trivially on (numerical) eigenstates of \(\hat{H}\): in that case \(\hat{H}_{\text{eff}}^{A_C}[A_C] \propto A_C\) and the whole MPS merely picks up a phase \(e^{-i dt E}\). In addition it conserves energy and is time-reversible for time-independent Hamiltonians [Vanderstraeten et al., 2019].

17.2. Time Evolution MPO#

Perhaps the most natural way to perform time evolution is to write the time evolution operator as an MPO. The evolved state is then simply the contraction of this MPO with an MPS (see ref). The exponential implementing the time evolution can be approximated up to any order by its truncated Taylor series

\[\exp(-\tau\hat{H}) = \hat{1} + \tau\hat{H} + \frac{\tau^2}{2}\hat{H}^2 + \mathcal{O}(\tau^3)\]
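As a quick numerical sanity check of this truncation, here it is on a small dense matrix rather than an MPO (the Hamiltonian H_toy below is an arbitrary illustrative choice):

using LinearAlgebra

σˣ = [0 1; 1 0]; σᶻ = [1 0; 0 -1]; id = Matrix{Float64}(I, 2, 2)
H_toy = -kron(σᶻ, σᶻ) - 0.25 * (kron(σˣ, id) + kron(id, σˣ))   # small dense test Hamiltonian
τ = 0.01

U_exact  = exp(-τ * H_toy)
U_first  = I - τ * H_toy                            # first-order truncation
U_second = I - τ * H_toy + (τ^2 / 2) * H_toy^2      # second-order truncation

@show opnorm(U_exact - U_first)    # error of order τ²
@show opnorm(U_exact - U_second);  # error of order τ³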

The MPO approximation of the time evolution operator then boils down to implementing powers of \(\hat{H}\) in an efficient (i.e. with the lowest possible MPO bond dimension) and size-extensive way [Damme et al., 2023]. For example, consider an MPO Hamiltonian of the form

../_images/MPOHam.svg

which corresponds to the Hamiltonian

\[\hat{H} = \sum_i \left(\hat{D}_i + \hat{C}_i \hat{B}_{i+1}\right)\]

The first-order approximation of \(U=\exp(-\tau\hat{H})\) is then given in MPO form by

../_images/TimeMPO_1stOrder.svg

Carrying out the matrix multiplication (and remembering that the MPO boundary conditions are such that we need to track the upper-left entry) we find

\[\exp(-\tau\hat{H}) \approx \hat{1} + \tau \sum_i \left(\hat{D}_i + \hat{C}_i \hat{B}_{i+1}\right) = \hat{1} + \tau \hat{H}\]

as desired.

The trick for generating the first-order approximation involves removing the third “level” from the MPO form of \(\hat{H}\) and multiplying by the appropriate factor of \(\tau\). This can be visualised as follows

../_images/FirstOrderTrick.svg

This method can be extended to any desired order in \(\tau\) as outlined in [Damme et al., 2023].

17.3. Imaginary time evolution#

Besides simulating dynamics, any time evolution method can also be used to find the ground state of \(\hat{H}\) by taking \(t\) to be imaginary. The basis for this idea is the fact that

\[\ket{\Psi_0} = \lim_{\tau\to+\infty} \frac{e^{-\tau \hat{H}} \ket{\Psi}}{\sqrt{\braket{\Psi|e^{-2\tau \hat{H}} |\Psi}}}\]

where \(\ket{\Psi}\) is any initial state that is not orthogonal to the ground state. Indeed, expanding the initial state in the eigenbasis of \(\hat{H}\) we have \(\ket{\Psi} = \sum_i c_i \ket{E_i}\), with \(\ket{E_i}\) the eigenstate corresponding to energy \(E_i\) and ordering \(E_0 < E_1 < E_2 < \dots\). Then

\[e^{-\tau \hat{H}} \ket{\Psi} = \sum_i c_i e^{-\tau E_i} \ket{E_i}\]

In the limit \(\tau\to+\infty\) the slowest-decaying exponential is the one associated with \(E_0\), so the ground state gets projected out of the initial state. Demanding that the state remains normalized gives

\[\lim_{\tau\to+\infty} \frac{e^{-\tau \hat{H}} \ket{\Psi}}{\sqrt{\braket{\Psi|e^{-2\tau \hat{H}} |\Psi}}} = \lim_{\tau\to+\infty} \frac{\sum_i c_i e^{-\tau E_i} \ket{E_i}}{\sqrt{\sum_i |c_i|^2 e^{-2\tau E_i}}} = \frac{c_0 }{|c_0|} \ket{E_0}\]

which gives the ground state up to an irrelevant phase factor.
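The projection onto the ground state can be illustrated with a small dense example (H_toy is an arbitrary 4×4 Hamiltonian, nothing MPS-specific):

using LinearAlgebra

σˣ = [0 1; 1 0]; σᶻ = [1 0; 0 -1]; id = Matrix{Float64}(I, 2, 2)
H_toy = -kron(σᶻ, σᶻ) - 0.35 * (kron(σˣ, id) + kron(id, σˣ))

ψ  = normalize(randn(4))                 # random state, generically not orthogonal to the ground state
τ  = 100.0
ψτ = normalize(exp(-τ * H_toy) * ψ)      # e^{-τH}|Ψ⟩, renormalised

ψτ' * H_toy * ψτ ≈ minimum(eigvals(H_toy))   # the energy has converged to the ground-state energy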

17.4. Finite Temperature#

It is possible to use time evolution methods to construct thermal density operators \(\rho = \frac{1}{Z}e^{-\beta \hat{H}}\), with \(\beta=1/T\) and \(Z\) a normalization constant. The idea is to write \(\rho\) as an MPO

../_images/DensityMatrix.svg

with the constraint that

../_images/Mconstraint.svg

Here the triangles represent an isometry that fuses the two legs into one bigger leg. This particular form ensures that \(\rho\) is a positive semi-definite operator and thus physical [Verstraete et al., 2004]. Note that for \(d_k=1\) we obtain the density matrix of a pure state. We can represent \(\rho\) as the density matrix of a pure state (i.e. an MPS) by introducing ancillas \(\{\ket{a_k}\}\) so that

../_images/AncillaMPS.svg

where the thicker physical legs indicate that they contain both the \(s\) and \(a\) degrees of freedom. One immediately sees that \(\rho=\text{Tr}_a({\ket{\Psi}\bra{\Psi}})\). The thermal density operator \(\rho(\beta)\) for any \(\beta\) can then be found by starting from the \(\beta=0\) state \(\rho(0)=\mathbf{1}\) and performing imaginary time evolution

\[e^{-\beta \hat{H}} = (e^{-\Delta \tau \hat{H}})^M\rho(0)(e^{-\Delta \tau \hat{H}})^M\]

with \(\Delta \tau = \frac{\beta}{2M}\) [Verstraete et al., 2004].
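As a dense toy illustration of this construction (again on a small matrix rather than an MPO; H_toy, the inverse temperature and the number of steps are arbitrary choices):

using LinearAlgebra

σˣ = [0 1; 1 0]; σᶻ = [1 0; 0 -1]; id = Matrix{Float64}(I, 2, 2)
H_toy = -kron(σᶻ, σᶻ) - 0.5 * (kron(σˣ, id) + kron(id, σˣ))

# apply M symmetric steps e^{-ΔτH} ⋅ ρ ⋅ e^{-ΔτH} to ρ(0) = 𝟙, as one would do step by step
# with an MPO representation of ρ
function thermal_state(H, β, M)
    Δτ   = β / (2M)
    step = exp(-Δτ * H)
    ρ    = Matrix{Float64}(I, size(H)...)   # ρ(0) = 𝟙, i.e. infinite temperature
    for _ in 1:M
        ρ = step * ρ * step
    end
    return ρ / tr(ρ)
end

ρ = thermal_state(H_toy, 2.0, 100)
tr(ρ * H_toy) ≈ tr(H_toy * exp(-2.0 * H_toy)) / tr(exp(-2.0 * H_toy))   # matches the exact thermal average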

17.5. Code example: MPSKit.timestep, make_time_mpo#

Below is some code showing how MPSKit and MPSKitModels can be used out of the box to perform time evolution.

using TensorKit,MPSKit,MPSKitModels
using Plots
H₀ = transverse_field_ising(;J=1.0,g=0.0);

#Create a random MPS with physical bond dimension d=2 and virtual D=10 and optimize it
Ψ = InfiniteMPS([2],[10]);
(gs,envs)    = find_groundstate(Ψ,H₀,VUMPS(;verbose=false));

#Let's check some expectation values
sz_gs     = expectation_value(gs,σᶻ()) # we have found the |↑↑...↑> or |↓↓...↓> state
E_gs      = expectation_value(gs,H₀,envs)

# time evolution Hamiltonian
Ht        = transverse_field_ising(;J=1.0,g=0.25);
Ebefore   = real(expectation_value(gs,Ht)[1])
dt        = 0.1

# Let's do one time step with TDVP
alg       = TDVP();
envs      = environments(gs,Ht);
(Ψt,envs) = timestep(gs,Ht,dt,alg,envs);
szt_tdvp  = real(expectation_value(Ψt,σᶻ())[1]);
Et_tdvp   = real(expectation_value(Ψt,Ht,envs)[1]);

# let's make a first order time evolution mpo out of Ht
Ht_mpo    = make_time_mpo(Ht, dt, TaylorCluster{1}());

(Ψt,_) = approximate(gs, (Ht_mpo, gs), VUMPS(; verbose=false));
szt_tmpo  = real(expectation_value(Ψt,σᶻ())[1]);
Et_tmpo   = real(expectation_value(Ψt,Ht)[1]);

@show szt_tdvp-szt_tmpo
@show Ebefore-Et_tmpo
@show Ebefore-Et_tdvp;
[ Info: calclw failed to converge 3.461394966703175e-14
szt_tdvp - szt_tmpo = -2.0266834715787496e-5
Ebefore - Et_tmpo = 1.2840946593151159e-5
Ebefore - Et_tdvp = 1.4748136123454714e-6

We see that \(\langle σᶻ(t) \rangle\) after one timestep is reasonably close for both methods, but that the energy (density) is conserved to better accuracy by the TDVP method. If we were to do the time evolution for many timesteps and for different orders of the time evolution MPO, we would end up with the following plot

../_images/TimeEvolution.svg

We clearly see that increasing the order of the time evolution MPO improves the result.
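For reference, the TDVP curve in such a plot could be generated with a loop along the following lines, reusing gs, Ht, dt and alg from the cells above (the number of steps and the plotting details are arbitrary):

# evolve for many timesteps with TDVP and record ⟨σᶻ⟩(t)
Ψt, envs = gs, environments(gs, Ht);
sz_t = Float64[];
for _ in 1:100
    Ψt, envs = timestep(Ψt, Ht, dt, alg, envs)
    push!(sz_t, real(expectation_value(Ψt, σᶻ())[1]))
end
plot(dt .* (1:100), sz_t; xlabel="t", ylabel="⟨σᶻ⟩")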

# We can also find the groundstate using imaginary time evolution
H        = transverse_field_ising(;J=1.0,g=0.35);

# Here we will do 1 iteration and see that the energy has dropped
Ψ        = InfiniteMPS([2],[10]);
Ψenv     = environments(Ψ,H);
Ebefore  = real(expectation_value(Ψ,H,Ψenv)[1]);
(Ψ,Ψenv) = timestep(Ψ,H,-1im*dt,TDVP(),Ψenv);
Eafter   = real(expectation_value(Ψ,H,Ψenv)[1]);

Eafter < Ebefore
true

If we were to do this for many iterations, the energy of the evolved state would eventually reach that of the ground state, as shown below.
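Such a plot can be generated with a loop like the following sketch, reusing Ψ, H, dt, Ψenv and Eafter from the cell above (the number of iterations is arbitrary):

# keep taking imaginary-time steps; the energy keeps decreasing towards the ground-state energy
Es = Float64[Eafter];
for n in 1:200
    Ψ, Ψenv = timestep(Ψ, H, -1im * dt, TDVP(), Ψenv)
    push!(Es, real(expectation_value(Ψ, H, Ψenv)[1]))
end
plot(0:200, Es; xlabel="iteration", ylabel="E")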

../_images/ImagTimeEvolution.svg