In the last two decades, quantum simulation experiments have become a valuable tool for studying strongly correlated quantum many-body systems. Typical measurements in quantum simulation platforms such as quantum gas microscopes or superconducting qubit arrays consist of quantum snapshots of the system, which provide information about the underlying quantum many-body state with single-lattice-site resolution. Conventionally -- following what is accessible in solid-state experiments -- one- and two-point correlation functions are evaluated. Quantum snapshots, however, contain much more information, and it is an exciting new challenge to find ways to make the most of it.

In our research, we apply techniques established in the machine learning community to study quantum many-body systems and to make full use of all available data, with the goal of an unbiased and interpretable analysis.

When can a system be called thermalized? When local observables, for example the magnetization, have reached their thermal values? But what about two-point correlations? And perhaps most importantly: how can we check without any prior knowledge of what to search for? In [1], we study thermalization in the one-dimensional Bose-Hubbard model using snapshots from a cold atom experiment. In particular, we train a neural network to distinguish snapshots taken at a given time step of the evolution from snapshots of the thermal state. If the system has thermalized, snapshots of the time-evolved state will look very much like snapshots drawn from the thermal density matrix. In that case, the network's classification performance drops, since the two datasets become hard to distinguish.
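As a toy illustration of this idea, one can train a simple classifier on two synthetic snapshot datasets that share the same one-point functions but differ in their two-point correlations. The sketch below uses scikit-learn's logistic regression on correlator features in place of a deep network; all data and parameters here are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for snapshots of a 10-site chain (occupations 0/1).
# "Thermal": fully uncorrelated sites. "Evolved": identical on-site
# statistics, but each odd site copies its left neighbor, which builds in
# nearest-neighbor correlations invisible to one-point observables.
n_sites, n_shots = 10, 2000
thermal = rng.integers(0, 2, size=(n_shots, n_sites))
evolved = rng.integers(0, 2, size=(n_shots, n_sites))
evolved[:, 1::2] = evolved[:, 0::2]

X = np.vstack([thermal, evolved]).astype(float)
y = np.array([0] * n_shots + [1] * n_shots)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def features(x):
    # One-point terms plus all two-point products: a linear classifier on
    # these features can exploit two-point correlations in the snapshots.
    pairs = np.einsum('ni,nj->nij', x, x).reshape(len(x), -1)
    return np.hstack([x, pairs])

clf = LogisticRegression(max_iter=1000).fit(features(Xtr), ytr)
acc = clf.score(features(Xte), yte)
print(f"test accuracy: {acc:.2f}")  # well above 0.5: not yet "thermal"
```

A test accuracy near 50% would signal that the two datasets are statistically indistinguishable, i.e. thermalized at the level of everything the classifier can see.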

Another question we tackled based on snapshots concerns the Fermi-Hubbard model: given the ground truth -- in our case experimental data from a quantum simulation experiment -- how can we decide which effective theoretical description captures it best? In [2], we applied machine learning: we trained a neural network to distinguish snapshots -- as they would be measured with a quantum gas microscope -- generated from two different theories. After training, the network parameters are fixed. The neural network only knows two options: theory A (in our case a resonating valence bond state, as proposed by Nobel laureate Phil Anderson) or theory B (the geometric string theory). Next, we used experimental snapshots as input. Since the network can only assign one of the two theory labels, the resulting classification directly tells us which theory describes the experiment better, taking all available information into account (which in principle includes arbitrary higher-order correlators that can be extracted from the snapshots, see below)!
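The logic of this comparison can be sketched with a toy stand-in: train a classifier on samples from two hypothetical "theories", freeze it, and record which label it assigns to a third, "experimental" dataset. The distributions below are invented placeholders, not the actual resonating valence bond or geometric string snapshots:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sites, n_shots = 12, 3000

# Invented placeholder "theories": snapshot ensembles differing in density.
theory_a = (rng.random((n_shots, n_sites)) < 0.4).astype(float)
theory_b = (rng.random((n_shots, n_sites)) < 0.6).astype(float)

X = np.vstack([theory_a, theory_b])
y = np.array([0] * n_shots + [1] * n_shots)
clf = LogisticRegression(max_iter=1000).fit(X, y)
# From here on the classifier parameters stay fixed.

# Placeholder "experimental" data, drawn here from theory B's distribution.
experiment = (rng.random((n_shots, n_sites)) < 0.6).astype(float)
frac_b = clf.predict(experiment).mean()
print(f"fraction of experimental snapshots assigned to theory B: {frac_b:.2f}")
```

The fraction of experimental snapshots landing on each label is the quantity that says which theory describes the data better.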

In order to gain insight into what the neural network is looking for -- essentially to open the black box -- we developed a new, physics-inspired network architecture in [3]: the correlator convolutional neural network. By replacing the standard non-linearity with an expansion in orders of correlation functions, this architecture is fully interpretable and, in particular, tells us which (higher-order) correlation functions are most informative for the classification task at hand.
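To make the idea concrete, here is a minimal sketch of the first two orders of such a correlator expansion for a 1D snapshot and a single filter (our own simplified notation, not the published implementation):

```python
import numpy as np

def correlator_features(snapshot, filt):
    """First- and second-order correlator maps of a 1D binary snapshot
    for one filter (a simplified sketch of a correlator-CNN layer)."""
    L, K = len(snapshot), len(filt)
    c1 = np.empty(L - K + 1)
    c2 = np.empty(L - K + 1)
    for i in range(L - K + 1):
        w = filt * snapshot[i:i + K]       # filter-weighted window
        c1[i] = w.sum()                    # order 1: weighted density
        # order 2: sum over pairs of *distinct* sites inside the window,
        # using sum_{a<b} w_a w_b = ((sum_a w_a)^2 - sum_a w_a^2) / 2
        c2[i] = (w.sum() ** 2 - (w ** 2).sum()) / 2
    return c1, c2
```

Spatially averaging such maps and feeding them to a linear classifier yields a model whose weights directly name which correlators carry the discriminating information.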

By looking not only at the mean values of correlations, but also at their full counting statistics -- which is accessible from a dataset of snapshots -- the correlator convolutional neural network can even probe long-range correlations. We used this in [4] to find a qualitative change in snapshots of the 2D Heisenberg model as the temperature is tuned.
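A minimal numpy sketch of extracting full counting statistics from a snapshot dataset, here for the staggered magnetization of hypothetical uncorrelated spin snapshots (the data is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical spin snapshots (+/-1) on a 6x6 lattice; uncorrelated here,
# so the resulting statistics are purely for illustration.
snaps = rng.choice([-1, 1], size=(5000, 6, 6))

# Per-snapshot staggered magnetization. Its full counting statistics --
# the histogram over shots, not just the mean -- is what a dataset of
# snapshots gives access to; the variance, for instance, already contains
# a sum over two-point correlations at all distances.
sx, sy = np.indices((6, 6))
stagger = (-1.0) ** (sx + sy)
m_s = (snaps * stagger).mean(axis=(1, 2))

hist, edges = np.histogram(m_s, bins=21, range=(-1, 1))
print(f"mean {m_s.mean():+.3f}, variance {m_s.var():.3f}")
```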

There are many reasons to reconstruct a Hamiltonian, such as benchmarking a quantum device or numerical method, or finding parent Hamiltonians.

In [1], we use Hamiltonian reconstruction to gain physical insight into a strongly correlated quantum system. We consider a mixed-dimensional t-J model, where the charge sector can be straightforwardly integrated out. The resulting state is the finite-temperature state of the spin degrees of freedom alone. By reconstructing the effective Hamiltonian of the spin sector, we gain valuable insight into the effect of the charge degrees of freedom on the spins, and find in particular that the motion of charge carriers induces a next-nearest-neighbor J2 coupling in the spin sector, bringing it into the highly frustrated regime of the J1-J2 model.
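As a self-contained illustration of Hamiltonian reconstruction from snapshots, the sketch below recovers the coupling of a classical 1D Ising chain via pseudo-likelihood, i.e. logistic regression of each spin on its neighbors. This toy model and method choice are ours for illustration and are far simpler than the quantum setting in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
beta, J, L, n_shots = 0.7, 1.0, 20, 20000

# Exact sampling of a free-boundary classical Ising chain: along the chain
# the spins form a Markov chain with P(s_{i+1} = s_i) = e^{bJ} / (2 cosh(bJ)).
p_same = np.exp(beta * J) / (2 * np.cosh(beta * J))
spins = np.empty((n_shots, L))
spins[:, 0] = rng.choice([-1, 1], size=n_shots)
for i in range(1, L):
    same = rng.random(n_shots) < p_same
    spins[:, i] = np.where(same, spins[:, i - 1], -spins[:, i - 1])

# Pseudo-likelihood reconstruction: for an interior spin,
# P(s_i = +1 | rest) = sigmoid(2 * beta * J * (s_{i-1} + s_{i+1})),
# so logistic regression of s_i on its two neighbors learns weights
# of 2 * beta * J, from which the coupling can be read off.
m = L // 2
X = spins[:, [m - 1, m + 1]]
y = (spins[:, m] > 0).astype(int)
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
J_hat = fit.coef_[0].mean() / (2 * beta)
print(f"reconstructed coupling: J ~= {J_hat:.2f} (true value {J})")
```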

Representing quantum many-body systems and finding target states, such as ground states of interesting Hamiltonians, is a challenging problem, since the dimension of the Hilbert space grows exponentially with system size. Recently, neural networks have emerged as a powerful numerical tool for representing quantum many-body states. Ground states, for example, can be found with variational Monte Carlo.
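A minimal sketch of such a neural-network ansatz is the RBM wavefunction with its hidden units traced out analytically. The parameters below are random placeholders; in practice they would be optimized variationally:

```python
import numpy as np

rng = np.random.default_rng(4)
n_visible, n_hidden = 6, 4
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # couplings

def log_psi(s):
    """Log-amplitude of the RBM ansatz for spins s in {-1,+1}^N.
    Summing out the binary hidden units analytically gives
    log psi(s) = a.s + sum_j log(2 cosh(b_j + (s W)_j))."""
    theta = b + s @ W
    return a @ s + np.sum(np.log(2 * np.cosh(theta)))
```

In variational Monte Carlo one samples configurations from |psi|^2, evaluates local energies from such log-amplitudes, and updates (a, b, W) by stochastic optimization of the energy expectation value.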

In [1], we use a restricted Boltzmann machine (RBM) to represent a quantum state. In particular, we perform state reconstruction, where a quantum state is reconstructed from measurements. In any realistic setting, the number of measurements will be far smaller than the number needed to fully constrain all degrees of freedom. We implement an active learning algorithm that chooses the next measurement based on the information gathered from all previous measurements. This allows us, at each step, to perform the measurement with the largest expected information gain, thus maximizing the information per measurement (or, conversely, minimizing the number of measurements required).
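The spirit of such an active-learning loop can be sketched with a classical toy problem: estimating several unknown outcome probabilities while always querying the setting whose posterior is currently most uncertain. This variance-based criterion is a simplified stand-in for the information-gain measure used in the paper, and the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for active learning: four hypothetical measurement settings
# with unknown outcome probabilities. At every step we query the setting
# whose Beta posterior currently has the largest variance -- a simple
# proxy for picking the measurement with the largest information gain.
true_p = np.array([0.1, 0.5, 0.8, 0.95])
alpha = np.ones_like(true_p)   # Beta(alpha, beta) posterior per setting
beta = np.ones_like(true_p)

for step in range(400):
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    k = int(np.argmax(var))                 # most uncertain setting next
    outcome = rng.random() < true_p[k]
    alpha[k] += outcome
    beta[k] += 1 - outcome

estimates = alpha / (alpha + beta)
```

The loop automatically spends more measurements on the settings that are hardest to pin down, instead of distributing them uniformly.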

[1] "Adaptive Quantum State Tomography with Active Learning" -- Lange et al, arXiv:2203.15719