Adaptive Quantum State Tomography with Active Learning

Recently, tremendous progress has been made in the field of quantum science and technologies: different platforms for quantum simulation as well as quantum computing, ranging from superconducting qubits to neutral atoms, are starting to reach unprecedentedly large systems. In order to benchmark these systems and gain physical insights, the need for efficient tools to characterize quantum states arises. The exponential growth of the Hilbert space with system size renders a full reconstruction of the quantum state prohibitively demanding in terms of the number of necessary measurements. Here we propose and implement an efficient scheme for quantum state tomography using active learning. Based on a few initial measurements, the active learning protocol proposes the next measurement basis, designed to yield the maximum information gain. We apply the active learning quantum state tomography scheme to reconstruct different multi-qubit states with varying degrees of entanglement as well as to ground states of the XXZ model in 1D and a kinetically constrained spin chain. In all cases, we obtain a significantly improved reconstruction as compared to a reconstruction based on the exact same number of measurements and measurement configurations, but with randomly chosen basis configurations. Our scheme is highly relevant for gaining physical insights into quantum many-body systems as well as for benchmarking and characterizing quantum devices, e.g. for quantum simulation, and paves the way for scalable adaptive protocols to probe, prepare, and manipulate quantum systems.

The characterization of quantum states is of central importance: it is needed for assessing the performance of quantum algorithms on quantum computers, for certifying the quality of experimental quantum hardware, and it enables the investigation of complex quantum systems [1,2,3,4]. Hence, quantum state tomography (QST), the process of reconstructing quantum states from measurements, is of high interest for both experimental and theoretical research in quantum physics and quantum computing. The very nature of quantum states makes the state reconstruction a complicated task: available techniques that use a full parameterization of the wave function or density operator, e.g. linear inversion or the maximum likelihood method [5,6], suffer from an exponential scaling of the number of samples needed to reach a desired threshold accuracy with the size of the quantum system, owing to the exponential growth of the Hilbert space with system size. Therefore, the application of conventional tomography methods is mostly limited to only a few qubits [7]. Other works circumvent this problem by exploiting certain known state properties, e.g. MPS tomography [8,9].
[Fig. 1: Active learning cycle. In the first stage a reference measurement basis is chosen which is most suitable for reconstructing the state (grey). Then, based on the already measured data, the active learner requests specific, highly informative samples (blue). These are used to train an RBM representing the reconstructed quantum state.]

A relatively new approach is neural network quantum state tomography. As shown by Carleo and Troyer [10], neural network representations of quantum many-body wave functions are possible in many cases of interest, which can reduce the complexity to a tractable number of parameters. When training the neural quantum states on measurement data, the neural network weights capture specific structures in the input
data and generalize to the full state representation [10,11,12,13,14]. Furthermore, neural network representations can overcome limitations of non-machine-learning approaches: e.g. restricted Boltzmann machines, presented later in the text, can encode quantum states with an entanglement structure that goes beyond the area law due to their non-local connections between hidden and visible nodes [15,12], in contrast to matrix product state (MPS) representations of quantum states, which obey the area law of entanglement [16]. It has been shown that neural network based approaches allow the reconstruction of highly entangled states with more than a hundred qubits from a limited number of measurements, for example by using a restricted Boltzmann machine (RBM) [11,12]. Other neural network representations of quantum states include, among others, recurrent neural networks [17,18,19], variational autoencoders [20], convolutional neural networks [21], generative adversarial networks [22] and transformer architectures [23,24]. Furthermore, machine learning based tomography can allow accessing observables which cannot be directly inferred from an experiment itself [25].
Here we show that it is possible to further reduce the number of required measurement samples by combining restricted Boltzmann machines with an adaptive active learning scheme, as shown in Fig. 1. Active learning (AL) machine learning models are able to interact with their environment. In the case of the QST scheme we propose here, this interaction comprises requesting samples which are most informative for the further learning process [26]. Active learning has been applied successfully, e.g., in improving classification tasks [27,28] and speech recognition [29], in computational physics to speed up calculations of multidimensional functions [30], in condensed matter theory to map out a phase diagram [31], and in quantum experiment design [32]. Here, we use AL in the sense that our model actively decides which measurement configuration to consider for consecutive measurements in QST to improve the training most efficiently. The set of measurements is hence adaptively optimized during the training, which has already been shown to bring advantages for full quantum state tomography of very small systems [33,34,35,36,37]. Existing protocols, however, are often not scalable to larger systems for various reasons, for example because either the reconstruction of the quantum state or the procedure to obtain the next measurement basis becomes numerically intractable. Another obstruction is the optimal measurement basis itself: some adaptive protocols are based on performing measurements along the direction of the current reconstruction, which can be a highly entangled many-body state and thus extremely challenging to implement. Our protocol overcomes these challenges by combining efficient state tomography based on neural networks with an active learning scheme relying on single-qubit rotations only. We would like to point out that our method is designed to reduce the sampling complexity, but not the reconstruction complexity. The latter relies on classical resources, and is hence limited to quantum systems that can be simulated classically.
In this article we show that active learning can strongly reduce the number of samples needed for the reconstruction compared to a passive procedure relying on RBMs only. We investigate quantum states consisting of up to 19 qubits, or spin-1/2 particles, which are either generated on IBM quantum devices or simulated using the density-matrix renormalization group [16]. Extensions of our AL scheme to other settings, such as fermionic systems or soft-core bosons, should be possible.
This article is organized as follows: we start with a brief summary of existing QST schemes based on restricted Boltzmann machines, which constitute a building block of our improved active learning QST. We then introduce our active learning algorithm in section 2. In section 3, we define and motivate the different quantum states considered in this work, in particular multi-qubit states with variable degrees of entanglement and ground states of a one-dimensional XXZ model and a kinetically constrained spin chain with a hidden U(1) symmetry. We present the results of our active learning scheme for quantum state reconstruction in section 4 and conclude in section 5.

Restricted Boltzmann Machines
Since our AL scheme also includes the training of a committee of restricted Boltzmann machines, see Fig. 1, we introduce the latter in this section with a focus on their application to QST. We would like to emphasize that the idea of our AL scheme is independent of the specific representation of the reconstructed state and can in principle be applied in combination with any quantum state tomography scheme, such as other neural network architectures [20,17,21,22,23] as well as matrix product state based state reconstruction [38,8].
Here, we use the implementation of RBM quantum state tomography by Beach et al. [39] in the form of the Python package QuCumber. RBMs are capable of representing highly entangled quantum states [11,40] and can in principle be extended to deep Boltzmann machines to increase their expressivity [12]. Furthermore, the RBM ansatz can be supplemented with an additional hidden layer to represent mixed states with arbitrary degree of mixedness, as shown in Refs. [41,12,42]. In Ref. [41], this purification scheme has already been applied successfully to experimental measurements. Here, we restrict ourselves to the conventional RBM architecture representing pure states, with the limitations discussed e.g. in Refs. [40,43]. Hereby, the target many-body quantum state is represented in terms of two RBMs with weight vectors λ and µ, which define the amplitude and the phase of the reconstructed state, i.e.

ψ_RBM(x) = √(p_λ(x)) e^{iφ_µ(x)/2},   (1)
where x labels a general set of basis states.Details on the RBM wave function can be found in Appendix A.
An RBM consists of layers of so-called visible and hidden nodes v_i and h_j with bias weights b_i and c_j. A schematic representation is shown in Fig. 2. The visible nodes correspond to the input data, consisting of measurements of the state under investigation and given in terms of bit strings of zeros and ones. Both layers are connected and each connection is weighted by a parameter W_ij. To reconstruct a state and represent it in terms of an RBM, the network parameters are learned from a set of measurements. To this end, the weights b, c and W are adjusted such that the Kullback-Leibler (KL) divergence,

KL = Σ_x q(x) ln[ q(x) / p^(λ,µ)(x) ],   (2)

is minimized¹, which quantifies how close the reconstructed distribution p^(λ,µ)(x) = |ψ_RBM(x)|² is to the measured distribution q(x). Details on the training procedure can be found in Appendix A.
While the Kullback-Leibler divergence can be determined without prior knowledge of the target state², a commonly used way of benchmarking a QST scheme is a direct comparison with the target state. A useful measure for the performance of the reconstruction is the fidelity f = |⟨ψ_target|ψ_RBM⟩|², which represents the squared overlap of the RBM state representation and the target state [44]. Note that the fidelity can only be evaluated if the target state vector is explicitly known. For multi-qubit systems, f is often re-scaled to f^{1/N}, with N the number of qubits, to account for the exponential size of the underlying Hilbert space. Another way to benchmark the performance of the reconstruction scheme is to compare observables, like the density or correlators, of the target and reconstructed states, which will be done in section 4.2.

¹ The empirical distribution q(x) is defined as follows: if a possible outcome v is contained N_v times in the set of N_tot existing measurements D, then q(v) = N_v/N_tot. If x ∉ D has not been measured, q(x) = 0.
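To make the two figures of merit concrete, here is a minimal numpy sketch of the empirical distribution, the KL divergence, and the re-scaled fidelity f^{1/N}. The function names and the integer encoding of measured bitstrings are illustrative choices, not part of QuCumber's API.

```python
import numpy as np

def empirical_distribution(samples, dim):
    """q(x): relative frequency of each measured outcome (integer-encoded bitstring)."""
    q = np.zeros(dim)
    for x in samples:
        q[x] += 1
    return q / len(samples)

def kl_divergence(q, p, eps=1e-12):
    """KL(q || p); the sum runs only over outcomes that were actually measured."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / (p[mask] + eps))))

def rescaled_fidelity(psi_target, psi_rbm, n_qubits):
    """f^(1/N) with f = |<psi_target | psi_rbm>|^2 and N the number of qubits."""
    f = abs(np.vdot(psi_target, psi_rbm)) ** 2
    return f ** (1.0 / n_qubits)

# toy check with a perfectly reconstructed two-qubit GHZ state
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
f_resc = rescaled_fidelity(psi, psi, n_qubits=2)  # ~1
kl = kl_divergence(empirical_distribution([0, 3, 0, 3], 4), np.abs(psi) ** 2)  # ~0
```

For a perfect reconstruction both quantities take their optimal values, as in the toy check above.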
To obtain information about both the amplitude and the phase of the target state, different types of measurements are required: (i) Samples measured in the reference basis contain information about the amplitudes of the target state. (ii) Samples from measurements in further, different basis configurations are required to extract information about the phase. Each measurement in a different basis configuration corresponds to a rotation of one or several qubits into the x, y or z basis individually (see Appendix A for the definition of the rotation matrices).
In the following, we will denote measurement configurations by C and the rotation to the respective measurement configuration by R_C.
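The notation above can be sketched in a few lines of numpy: measuring a qubit in the x (or y) basis is equivalent to applying a Hadamard (or Hadamard · S†) rotation and then measuring in z, and R_C is the tensor product of the single-qubit rotations. This is a generic illustration under standard conventions, not the paper's Appendix A code.

```python
import numpy as np
from functools import reduce

# Single-qubit rotations mapping the chosen measurement axis onto z:
# measuring in configuration C is equivalent to a z-basis measurement
# of the rotated state R_C |psi>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # x basis
Sdg = np.diag([1.0, -1.0j])                    # S^dagger, for the y basis
R = {"x": H, "y": H @ Sdg, "z": np.eye(2)}

def rotate(psi, config):
    """Apply R_C = R_c1 (x) ... (x) R_cN for a configuration string such as 'xzy'."""
    R_C = reduce(np.kron, [R[c] for c in config])
    return R_C @ psi

# example: |+> (x) |0> measured in the 'xz' configuration gives '00' with certainty
psi = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))
probs = np.abs(rotate(psi, "xz")) ** 2
```

Since R_C factorizes over qubits, only single-qubit rotations are ever required on the hardware, which is what makes the configurations {R_C} experimentally cheap.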

Active Learning Algorithm
Depending on the properties of the quantum state under consideration, there are different configurations which contain relevant information and are hence most useful for the reconstruction [34]. For most states it is difficult and time-consuming to obtain the optimal set of measurement configurations and the number of samples which lead to a good reconstruction (see Appendix B.1). Here, we apply AL to choose, during the learning process, the configurations that contain the most new information about the state under consideration.

[Fig. 3: Calculation of the committee disagreement, Eq. (6). The shown calculation is performed separately for each configuration x_i. For simplicity, the procedure is shown for real-valued wave functions; the full AL scheme, including the states' phase structure, is explained in the text.]

Note that although the choice of quantum state representation in terms of an RBM comes with the limitations discussed above, our AL scheme itself does not make any assumptions on the type of state or its properties and is not restricted to the specific choice of neural wave-function representation. Within our scheme, the configurations are chosen such that the amount of information contained in a finite number of measurements from a configuration is larger than for other configurations. We show that by requesting this specific, highly informative data, the total number of samples can be reduced while the accuracy of the machine learning model is increased compared to a procedure which uses RBMs only.
The choice of the next measurement configuration is the core of our active learning algorithm. It is made by applying the so-called query-by-committee strategy from active learning theory [45,46,26]. Hereby a committee of n_RBM models (here RBMs) Θ = {θ_1, . . ., θ_{n_RBM}}, initialized with different parameters, is trained on the same data. In each active learning step, each member of the committee Θ casts its vote on what the true wave function should look like, based on its learned quantum state representation {ψ_RBM,1, . . ., ψ_RBM,n_RBM}. The committee selects the measurement configuration C* for which the models disagree most on the training outcomes. As shown in Ref. [46], the degree of disagreement among the committee members can be seen as an estimate for the information value. A common strategy to calculate the level of disagreement is the Kullback-Leibler divergence to the mean [47,48], which uses the Kullback-Leibler divergence, Eq. (2), as a distance measure between the probability distribution of each model, P_θ, and the average distribution of all models, P_Θ(x) = (1/n_RBM) Σ_{θ∈Θ} P_θ(x). Then, the configuration C* that maximizes the distance is selected. Here, we adopt the same strategy, but with a different distance measure, given by Jeffrey's distance [49] instead of the KL divergence, to measure the distance between the probability distributions {P_{θ_1}, . . ., P_{θ_{n_RBM}}}. In contrast to the KL divergence, Jeffrey's distance has symmetric contributions for both P_θ(x) ≤ P_Θ(x) and P_θ(x) ≥ P_Θ(x) since no logarithm is involved, ensuring that all members of the committee have equal votes. More precisely,

C* = argmax_C Σ_x Σ_{θ∈Θ} ( √(P_θ(x)) − √(P_Θ(x)) )²,   (5)

which can be brought to the form

C* = argmax_C Σ_x var²( |ψ_RBM(x)| ),   (6)

with the empirical variance var² over the committee of the wave-function amplitudes |ψ_RBM(x)| = √(P_θ(x)). The calculation of this term is shown in Fig. 3. It has been shown that query-by-committee algorithms work in some cases already for very small committees with only up to three members [46,47,26]. In order to take complex wave functions into account, we calculate Eq. (6) for the phase part of ψ_RBM as well and normalize the amplitude and phase variances by the respective L1-normalized values. Depending on the size of the Hilbert space, the calculation can be done by comparing the full state vectors (i.e. calculating the variance of ψ_RBM,i(x) for every basis state x) or by sampling from each of the n_RBM RBM distributions.
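The amplitude part of the selection rule can be sketched as follows: for each candidate configuration, rotate every committee member's state, compute the empirical variance of the amplitudes across the committee, and pick the configuration with the largest summed variance. This is an illustrative stand-in for Eq. (6) acting on exact state vectors rather than trained RBMs; all function names are our own.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ROT = {"x": H, "y": H @ np.diag([1.0, -1.0j]), "z": np.eye(2)}

def disagreement(committee, config):
    """Summed empirical variance of the rotated amplitudes |psi_theta(x)|
    across the committee -- a stand-in for the amplitude part of Eq. (6)."""
    R_C = reduce(np.kron, [ROT[c] for c in config])
    amps = np.array([np.abs(R_C @ psi) for psi in committee])  # shape (n_RBM, 2^N)
    return float(np.var(amps, axis=0).sum())

def query_by_committee(committee, candidates):
    """Select the configuration C* on which the committee disagrees most."""
    return max(candidates, key=lambda c: disagreement(committee, c))

# two single-qubit 'reconstructions' that coincide in x and y but differ in z
committee = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
best = query_by_committee(committee, ["x", "y", "z"])   # -> 'z'
```

In the toy example the two members agree perfectly on the x- and y-basis amplitudes but differ maximally in z, so the z configuration is queried next.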
A schematic overview of our complete active learning algorithm is provided in Fig. 1 or z basis and hence 3 N rotations (denoted by {R C }) come into question.We apply the query-by-committee strategy as described above to allow an efficient choice of C * .To this end, we rotate the RBM representations using the rotations {R C } and request samples from those configurations for which the RBMs disagree most.The level of disagreement is calculated using eq.( 6) for the amplitudes and phases separately.In order to make them comparable, we normalize by the absolute (L1 normalized) value of the amplitudes and variances, respectively.Note that in this step the rotations {R C } will be applied to the RBM representations of the wave functions and not to the actual quantum state, which can be done in a much more efficient way.
a) If the RBMs disagree more on the amplitudes than on the phases of the reconstructed states (i.e. the variance of the amplitudes is larger than the variance of the phases of ψ_RBM across the different RBMs), we skip the full query-by-committee procedure and request measurements from the reference basis.
b) Else, the probability distributions P_{θ_i} of all i = 1, . . ., n_RBM RBMs are used to determine the best choice for the next configuration C* from the up to 3^N candidates (with N the system size), as described above.
5.) Samples in the requested configuration are measured and steps 3 and 4 are repeated.

6.) The training is stopped when the errors of observables computed from the target and RBM wave functions drop below a pre-defined threshold value or when a maximum number of queries is reached. The stopping criteria for the states under consideration are specified in the respective sections.
Furthermore, our AL-QST scheme automatically requests samples from the two other configurations besides the reference configuration mentioned in step 1 if the reference configuration is requested twice in a row. For example, if the reference configuration is the zz . . . z configuration and measurements from this configuration are requested twice in a row, samples from the xx . . . x and yy . . . y configurations will be added in the next learning cycles. This is particularly relevant for rotationally invariant systems like the Heisenberg chain. Hereby, no new measurements are needed since the discarded measurements from step 1 can be used.
Note that step 4b is a bottleneck of our approach, since the computational effort to calculate the variance for all possible configurations scales exponentially with N. However, this scaling can still be favorable for relatively large system sizes since it only concerns the RBM representation of the target wave function, and not the number of measurements or measurement configurations from the experiment. Furthermore, we show that our AL scheme still improves the reconstruction when it is applied to a randomly chosen subset of all possible configurations. Potential ways to overcome the exponential scaling are discussed at the end of this paper.
The reference measurement configuration chosen in step 2 is selected in a similar way as the new configurations in step 4. The important difference to the query-by-committee strategy of step 4 lies in the fact that a good reference configuration C_ref should be the best available starting point for the training procedure. Hence, we want to select the candidate C_i with the best estimate of the target wave function. In a similar spirit as in step 4, we train all members of the committee, but on each data set {x_z...z}, {x_x...x} and {x_y...y} obtained in step 1 separately and, in contrast to step 4, we choose the candidate C_i on which the n_RBM RBMs agree most on the wave-function state vectors, i.e. where the variance is the lowest.
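The reference-basis selection is the mirror image of the query step: instead of the largest committee variance, the candidate with the smallest variance wins. A minimal sketch, assuming the per-candidate committee reconstructions are already available as amplitude vectors (the function name and data layout are illustrative):

```python
import numpy as np

def choose_reference(committee_states):
    """Pick C_ref as the candidate configuration on which the committee
    members agree most, i.e. with the lowest summed empirical variance of
    the reconstructed amplitude vectors. `committee_states` maps a
    configuration label to the list of amplitude vectors obtained by
    training each committee member on that configuration's data alone."""
    scores = {c: float(np.var(np.array(vecs), axis=0).sum())
              for c, vecs in committee_states.items()}
    return min(scores, key=scores.get)

# toy committee: full agreement on the 'zz' data, disagreement on 'xx'
committee_states = {
    "zz": [[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]],
    "xx": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]],
}
ref = choose_reference(committee_states)   # -> 'zz'
```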
The reference to an implementation of our algorithm is provided at the end of this paper.

Considered States
Before we present results of our AL-QST scheme, we provide an overview of the quantum states used to benchmark our method. We chose two types of states: first, a set of generic qubit states with variable degree of entanglement, which provides direct insight into the performance of our scheme; second, many-body ground states of one-dimensional spin Hamiltonians, as an illustration of our method for quantum simulators.

Qubit states and IBM's Quantum cloud
We investigate the reconstruction of states which are generated on real quantum devices and classical simulators of the IBM Quantum platform [50]. IBM provides 21 quantum systems based on superconducting qubits which can be accessed via the cloud. Systems with up to five qubits are accessible for non-internal users with a free account. Using Qiskit, it is possible to implement quantum algorithms on these quantum systems [51].
Here, we consider the reconstruction of four target states: Greenberger-Horne-Zeilinger (GHZ) states, GHZ states with a complex phase, polarized product states with all qubits set to one (spin chains with all spins pointing in the z direction), and states with equal amplitudes for all components of the state vector. The latter correspond to spin chains with all spins pointing in the x direction, e.g. for two qubits |ψ⟩ = (|00⟩ + |01⟩ + |10⟩ + |11⟩)/2. Using the IBM platform, we investigate the reconstruction of states with five qubits. Tools for rotating and measuring the quantum systems are provided by Qiskit. An example for measuring a two-qubit system in the xy basis (first qubit rotated to the x axis, second qubit rotated to y) is shown in Appendix A. In the following sections we will use the same notation, where e.g. xx . . . x denotes a system with all qubits rotated from the z to the x axis.
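As a numpy stand-in for the Qiskit workflow (not the Appendix A code itself), the sketch below prepares a five-qubit GHZ state vector and draws z-basis snapshots from the Born distribution. It also illustrates why the GHZ state with a complex phase requires rotated configurations: a phase e^{iφ} on |11111⟩ leaves all z-basis probabilities unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 5

# GHZ state (|00000> + |11111>)/sqrt(2); adding a phase exp(i*phi) on |11111>
# leaves every z-basis probability unchanged, so z measurements alone can
# never reconstruct the phase structure.
ghz = np.zeros(2 ** N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

def sample_z(psi, shots):
    """Draw z-basis bitstrings from the Born distribution |psi(x)|^2."""
    idx = rng.choice(len(psi), size=shots, p=np.abs(psi) ** 2)
    return [format(i, "05b") for i in idx]

shots = sample_z(ghz, 20)   # only '00000' and '11111' ever occur
```

Such simulated snapshots have the same bitstring format as the measurement outcomes returned by the IBM devices, so they can be fed into the same reconstruction pipeline.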

XXZ and Heisenberg states
As a second example, we use our AL-QST scheme to reconstruct ground states of the XXZ Hamiltonian

Ĥ_XXZ = Σ_⟨i,j⟩ [ J ( Ŝ^x_i Ŝ^x_j + Ŝ^y_i Ŝ^y_j + Ŝ^z_i Ŝ^z_j ) + Δ Ŝ^z_i Ŝ^z_j ],   (11)

where Ŝ^µ_i(j) denotes the µ ∈ {x, y, z} component of the spin-1/2 operator at site i(j) = 1, . . ., L. This model encompasses ground states with a strong polarization in the z direction for large |Δ| (broken Z_2 symmetry) as well as critical states without long-range order for −2J ≤ Δ ≤ 0. For Δ = 0 the ground states are SU(2) invariant (Heisenberg antiferromagnet). In what follows we compute the ground states of Eq. (11) using a matrix product state representation with the SyTen package [52,53].
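For small chains the same ground states can be obtained by exact diagonalization, which is convenient for cross-checking a reconstruction. Below is a self-contained numpy sketch of an XXZ-type chain, H = Σ_j [J S_j·S_{j+1} + Δ S^z_j S^z_{j+1}], on a short open chain; the parametrization follows the convention stated in the text (Δ = 0 is the SU(2)-symmetric point), and all function names are our own.

```python
import numpy as np
from functools import reduce

# spin-1/2 operators
Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def op(single, site, L):
    """Embed a single-site operator at `site` into an L-site chain."""
    factors = [I2] * L
    factors[site] = single
    return reduce(np.kron, factors)

def xxz_hamiltonian(L, J, Delta):
    """H = sum_j [ J S_j . S_{j+1} + Delta Sz_j Sz_{j+1} ] on an open chain;
    Delta = 0 is the SU(2)-symmetric Heisenberg point."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for j in range(L - 1):
        for S in (Sx, Sy, Sz):
            H += J * op(S, j, L) @ op(S, j + 1, L)
        H += Delta * op(Sz, j, L) @ op(Sz, j + 1, L)
    return H

H = xxz_hamiltonian(L=4, J=1.0, Delta=0.5)
energies, states = np.linalg.eigh(H)
ground_state = states[:, 0]   # reference state for benchmarking the tomography
```

For the system sizes studied in the paper such dense diagonalization is no longer feasible, which is why the DMRG/MPS representation is used instead.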

Kinetically constrained spin chain
As an additional illustrative example, we apply AL-QST to reconstruct ground states of a kinetically constrained one-dimensional spin chain (KCS) model. It is described by the Hamiltonian given in Eq. (12) [54,55,56], where Ŝ^µ_j denotes a spin-1/2 operator (µ = x, y, z) on site j = 1, . . ., L. This model has a very interesting interpretation, in which the spin domain walls in the x-basis correspond to particles on a dual lattice. More precisely, model (12) can be mapped to a one-dimensional Z_2 lattice gauge theory with matter [55], see Appendix C. We choose the system in Eq. (12) due to several non-trivial properties which we can check for in the reconstructed state. These include: (i) the underlying excitations are domain walls of the spins, extending beyond one site, a fact that needs to be captured by a reliable QST scheme; (ii) the Hamiltonian features a hidden U(1) symmetry describing the conservation of the total number of domain walls, whose number is controlled by the chemical potential µ; (iii) the model hosts gapless Luttinger liquids with a significant amount of non-local entanglement, presenting a general challenge for any QST scheme; and (iv) by tuning the effective field from h = 0 to h ≠ 0, a confinement-deconfinement transition occurs, where the nature of the constituents of the Luttinger liquid changes, as indicated by a change of the Fermi momentum k_F and reflected in the period of Friedel oscillations [55,56].
Below we compute the ground states of Eq. (12) with the density-matrix renormalization group (DMRG) using the SyTen package [52,53]. We obtain an MPS representation of the ground states, from which efficient snapshot sampling is possible [57], including in variable bases [58]. This provides us with the data required to run and benchmark the AL-QST algorithm.
To probe how close the reconstructed state is to the actual ground state, we compute the following observables. We start with the local domain-wall density,

n̂_j = 1/2 − 2 Ŝ^x_j Ŝ^x_{j+1},   (13)

which takes the value 1 on a bond hosting a domain wall and 0 otherwise. This immediately leads us to the conserved total system density n_tot = (1/L) Σ_j ⟨n̂_j⟩. In practice, we find it convenient to define a vector n of local densities with entries n_j = ⟨n̂_j⟩. In order to probe the confinement of domain walls we consider their non-local equal-time Green's function c(d), measured relative to the center L/2 (for L odd: L/2 + 1/2) of the chain (see Appendix C for more details). We highlight the following key properties of this function: (a) for distance d = 0, the density is recovered, c(0) = ⟨n(L/2)⟩; (b) the decay of c(d) allows to distinguish between the confined (exponential decay) and deconfined (power-law decay) regimes.
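Estimating the domain-wall observables from x-basis snapshots is straightforward: with spins recorded as s_j = ±1, a wall sits on bond j whenever neighbouring spins anti-align, n_j = (1 − s_j s_{j+1})/2. A minimal numpy sketch (our own helper names, operating on raw snapshot arrays rather than MPS data):

```python
import numpy as np

def domain_wall_density(snapshots):
    """Estimate the bond-resolved domain-wall density <n_j> from x-basis
    snapshots. `snapshots` has shape (n_shots, L) with entries +-1; a
    domain wall sits on bond j whenever neighbouring spins anti-align,
    n_j = (1 - s_j s_{j+1}) / 2."""
    s = np.asarray(snapshots, dtype=float)
    walls = (1 - s[:, :-1] * s[:, 1:]) / 2
    return walls.mean(axis=0)

# two shots of a 4-site chain: ++-- (one wall on the middle bond) and ++++
n = domain_wall_density([[1, 1, -1, -1], [1, 1, 1, 1]])   # -> [0. , 0.5, 0. ]
n_tot = float(n.mean())   # conserved total density, here 1/6
```

The same snapshot arrays, evaluated on samples drawn from the reconstructed RBM state, can be compared bond by bond against the DMRG reference.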

Results
The active learning curve for an exemplary state, namely a GHZ state with phase structure consisting of 5 qubits and generated with the IBM quantum simulator, is shown in Fig. 4. The results for AL tomography are compared to QST without active learning (denoted baseline), obtained with the pure RBM reconstruction of QuCumber using equally many, but randomly chosen, measurement configurations and the same number of samples as for AL (without taking the discarded data from step 1 into account in both cases). Note that we consider the number of samples and configurations at the end of the AL scheme for the baseline reconstruction.
Hence, the results for AL and baseline can differ already at the beginning of the training. The pool of samples which is used for the RBM training is the same for all RBMs (both for AL and baseline, respectively). Fig. 4 shows the learning curve for the AL scheme as presented in Fig. 1. After the selection of the reference configuration (zzzzz, not shown in Fig. 4) the training starts with 100 samples drawn from the reference basis, which are fed into the QuCumber state reconstruction as implemented by Beach et al. [39] (see step 3). In the subsequent learning sequence up to epoch 1000, the re-scaled fidelity f^{1/N} increases (1 − f^{1/N} decreases) and the Kullback-Leibler divergence KL decreases. However, the samples measured in step 1 of the active learning routine do not contain enough information to decrease 1 − f^{1/N} below a threshold value of 10% for all RBMs (see the first section in Fig. 4). The lack of information is mostly due to the fact that no information on the phase structure of the state under consideration is contained in the measurements from the zzzzz configuration.
After that, more and more samples, requested by the learner, are added to the pool of measurements, and the RBM learning process is re-started (with the same random initialization of the RBMs as before). For the state reconstruction presented in Fig. 1, the learner requests a measurement from the xzzxx configuration in the first 6 queries (see epochs 1000 to 6000), and from zyyyz in the last learning cycle starting in epoch 7000. Note that samples from these measurement configurations contain not only information on the amplitude, but also on the phase structure of the state under investigation. Step by step, these samples are added to the learning process. This process is stopped when a specified re-scaled fidelity (here f^{1/N}_stop = 90%) is reached or the number of posed queries exceeds a maximal value N^max_query (in this paper we use N^max_query = 30). The final fidelity in this example is f^{1/N} = 95.0 ± 1.5% (average over the six RBMs used for step 3) using only 107 samples and three different configurations. Note that we restart the learning from randomly initialized weights in each AL step, and not from the trained models of the previous steps, to avoid a bias from the incomplete data; this yields the periodic peaks in the fidelity and KL. Furthermore, it can be observed that there are regimes in the learning procedure where the difference between reconstructed and target state stagnates or even grows when adding the requested samples. This is not unintentional within our learning scheme: if a new sample is requested that contains completely different information (e.g. when adding the first sample from the xzzxx configuration to a pool of 100 samples from the zz . . . z configuration in epoch 1000), the new information cannot be successfully incorporated into the context of the data from previous AL steps at first. However, when requesting more and more samples afterwards, the same piece of information can become a valuable contribution to a good reconstruction.
A learning procedure without AL, with the same number of measurements and equally many, but randomly chosen, configurations (called baseline in the following) ends with f^{1/N}_baseline = 75 ± 11% (average over all RBMs, trained with the same samples), indicated by the green dotted line in Fig. 4. Hence, the quality of the reconstruction is strongly improved by the use of active learning. Furthermore, the spread of final fidelities for different RBMs is significantly decreased compared to a learning procedure without active learning (visualized by the green band in Fig. 4), which makes the results much more reliable than without AL. Note moreover that besides 1 − f^{1/N}, also the KL is reduced compared to the baseline, i.e. the final RBM reconstructions using AL can capture the given dataset in a more efficient way than without AL.
In the following sections, the results for the active learning procedure, as explained in section 2, are presented for states on IBM's quantum devices and for states generated with DMRG. The number of samples N_tot, number of queries N_queries, number of samples per query N_per query and number of different configurations N_conf are presented in Tabs. 1 and 2. Based on our experience that a good reconstruction of the amplitude in most cases requires more information about the reference configuration (compared to the reconstruction of the phase based on locally rotated configurations), we request more samples than N_per query (if not stated differently, 3·N_per query) whenever the reference configuration is requested.

Tomography results for quantum states on IBM's Quantum cloud
In this section the results for the states defined in Eqs. (7) to (10), generated on IBM's classical quantum simulator, which is designed to mimic the execution of an actual device, and on real quantum devices, are presented (for more details see Appendix B.1). We consider a system size of 5 qubits. The reconstruction results can be found in Fig. 5. Here the training is stopped as soon as the target fidelity is reached or the maximum number of queries is exceeded.

[Fig. 5: Reconstruction results for the states defined in Eqs. (8), (9) and (10) with 5 qubits, generated on classical quantum simulators (left) and real quantum devices (right). An exemplary learning curve can be found in Fig. 6.]

On real devices, the theoretically ideal state
may differ from the actually prepared state on the device. Consequently, the reconstruction fidelity is limited by the preparation errors and we set the stopping fidelity f accordingly. When reconstructing the GHZ state with AL, the active learner selects the zzzzz basis as the reference basis. The fidelity of the GHZ classical-quantum-simulator states is improved by around 25%, from f^{1/N} = 70.49 ± 0.10% without AL to f^{1/N} = 95.667 ± 0.001%. For real quantum devices the quality of the reconstruction improves as well, by around 22%, from f^{1/N} = 68.6 ± 8.3% without AL to f^{1/N} = 90.93 ± 0.16% with respect to the theoretically ideal state. Moreover, the standard deviation of the results for different RBMs is lowered by up to two orders of magnitude for the classical quantum simulators and quantum devices, which makes the results more robust when using AL. This improvement is achieved by requesting samples from only two additional measurement configurations.
In Fig. 6 the learning curve for the GHZ state is shown. When using AL, the fidelity is increased and the KL divergence decreased strongly at the end of the training. One can see that in the first learning cycle, which uses 60 samples drawn from the zzzzz reference basis, the information contained in these samples is not enough to increase the fidelity above the threshold value.

[Fig. 6: Learning curve for the GHZ state (Eq. (7)) with 5 qubits generated on a classical quantum simulator by IBM. Both 1 − f^{1/N} (with f^{1/N} the fidelity) and the Kullback-Leibler divergence KL are reduced when using AL, compared to a reconstruction with the same number of measurements, but randomly chosen measurement configurations.]

In the first query, a sample from a rotated configuration is added. The RBMs cannot
put the piece of new information in the context of the old measurements at first and and the infidelity increases compared to the first learning step.However, one can see that the RBMs still adapt to the new information since the KL is decreased to a comparable amount as in the first part of the learning.In the second query one sample from xxyxy is added.Together with the other samples, the information contained in the measurements is enough to adapt the phase of the reconstructed wave function such that the fidelity increases above the threshold value.
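The two learning-curve quantities can be sketched as follows; this is an illustrative sketch only, assuming the per-qubit fidelity f^{1/N} = (|⟨Ψ_target|Ψ_rec⟩|²)^{1/N} and the standard KL divergence between outcome distributions, not the exact definitions of the implementation:

```python
import numpy as np

# Illustrative sketch of the learning-curve metrics: per-qubit fidelity
# f^(1/N) between pure states and the Kullback-Leibler divergence between
# measurement distributions. Definitions are assumed, not taken verbatim
# from the implementation.
def fidelity_per_qubit(psi_target, psi_rec, n_qubits):
    overlap = np.abs(np.vdot(psi_target, psi_rec)) ** 2
    return overlap ** (1.0 / n_qubits)

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

# 5-qubit GHZ state as a toy target; a perfect reconstruction gives f = 1
ghz = np.zeros(2 ** 5)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(fidelity_per_qubit(ghz, ghz, 5))                     # 1.0 (up to rounding)
print(kl_divergence(np.abs(ghz) ** 2, np.abs(ghz) ** 2))   # 0.0
```

A reconstruction that matches the amplitudes but not the phases can reach a small KL divergence while the infidelity 1 − f^{1/N} stays large, which is the behaviour visible in the learning curves.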
For the GHZ state with phase structure, the AL-QST scheme also requests additional measurements in 2 (5) additional measurement configurations for states generated with the classical simulator (real quantum device). In contrast to the GHZ state without phase structure, much more information is contained in these configurations, namely the phase difference of |0...0⟩ and |1...1⟩. Our scheme is able to capture the importance of measurements from configurations other than the reference configuration and requests many of the configurations several times, until the learning is stopped after 7 (25) queries.
The AL-QST scheme improves the reconstruction results to a fidelity of f^{1/N} = 95.0 ± 1.5 % (87.5 ± 4.3 %), compared to the baseline with 75 ± 11 % (61 ± 13 %) for additional measurements from randomly selected configurations. The learning curves for states generated with the classical simulator and on a real quantum device are shown in Figs. 4 and 7. In both cases it can be seen that adding samples which contain more information on the phase structure can confuse the learner at first (see epochs 1000 to 2000 in both figures), but when enough information on the phase is added, the reconstruction improves again. For the state generated with the classical simulator (see Fig. 4) this happens by adding samples from the xzzxx and xyyyx configurations. For the states generated on the real devices five additional measurement configurations are needed.
For the state with all spins pointing upwards (all qubits having value one), the reference basis containing the most valuable information about the state is the zzzzz basis. When using AL, this configuration is selected and the results are already excellent in the first cycle of AL, with a fidelity of f^{1/N} = 99 % for both simulator and real device. The QuCumber reconstruction without AL uses the zzzzz reference basis by default, which coincidentally is the perfect reference basis for the reconstruction of this state. Therefore, no difference between AL and the baseline can be observed.
For the state with all spins pointing in the x direction, the reconstruction results are improved as well. Here, the xxxxx basis is chosen and the fidelity is increased by 14 % to f^{1/N} = 98.841 ± 0.022 % for simulated states (f^{1/N} = 85.28 ± 0.40 % for real quantum devices) with respect to the theoretically ideal state. Furthermore, the variance is decreased by a factor of up to 10. In contrast to the GHZ state, where the improvement is achieved by increasing the fidelity step by step with every query, for this system the underlying reason for the improvement is the rotation of the reference configuration: When the x-spins state is rotated from the zz...z configuration to the xx...x configuration, the measurement distribution changes from equally distributed peaks for all outcomes to a single peak at 00...0. Similarly to the state with all spins in the z direction, it is relatively easy for the RBMs to learn this distribution. We would like to point out that the reconstruction fidelities of the real quantum states are also limited by preparation errors, which become much more prominent for the x-spin states than for the z-spin states, since a rotation from the z to the x direction comes with additional preparation errors. This is also in agreement with the large difference of reconstruction errors between simulated and real x-spin states.

Tomography results for DMRG states
To investigate the AL reconstruction of many-body quantum states, we use the matrix product state framework for representing quantum states, such as ground states of the XXZ and the KCS models, and for sampling in different basis configurations. In this section, the AL results for these states with 5 to 19 qubits are presented in Figs. 8 to 13, with the number of samples and configurations used for the reconstruction listed in Tab. 2. For all states a committee of four RBMs was used.

Reconstruction of XXZ and Heisenberg States
In Fig. 8 the reconstruction results for the ground states of the XXZ Hamiltonian are shown. Here, the training is stopped as soon as the magnitude of the reconstructed correlator |⟨Ŝ^α_i Ŝ^α_{i+1}⟩| is at least 2/3 of the target value and the correct sign is obtained. It can be seen that for ∆ = −1, 0, 1 the AL-QST scheme performs significantly better than the baseline, and for ∆ = 5 equally well. Furthermore, we emphasize that the RBM representation in general yields better reconstruction results for the amplitudes than for the phases of the considered states, which has consequences for the reconstruction of the XXZ states: Firstly, the RBM reconstruction is not able to capture the SU(2) invariance of the ∆ = 0 Heisenberg state, as can be seen e.g. also in Ref. [11]. Secondly, a better reconstruction of quantities measured in the reference direction compared to the orthogonal directions is obtained for all values of ∆; e.g., the reconstruction of the correlator ⟨Ŝ^α_i Ŝ^α_{i+1}⟩ for ∆ = 0 with zz...z being the reference configuration is closer to the actual value for α = z than for α = x, y, since the values of the latter correlators are systematically underestimated by the RBM representation. A similar tendency can be observed for small ∆ like ∆ = −1 (1), where the reference configurations selected by the AL-QST scheme are the xx...x (zz...z) configurations. Details on the AL-QST scheme can be found in Tab. 2.
The learning curve for ∆ = −1 is shown in Fig. 9. This state has a small polarization in the z direction compared to the x and y directions. Our AL-QST scheme is able to capture this by selecting the xx...x configuration as the reference configuration instead of the zz...z configuration. Consequently, as explained above, the reconstruction of the correlator orthogonal to the reference direction, ⟨Ŝ^z_i Ŝ^z_{i+1}⟩, even has a wrong sign after the first learning phase up to epoch 3000. After that, the AL-QST scheme requests the reference (i.e. xx...x) configuration in the first query (see epochs 3000 to 6000), which does not yield a significant improvement of the reconstruction of ⟨Ŝ^z_i Ŝ^z_{i+1}⟩. In the next two queries, the AL scheme as explained in section 2 selects the orthogonal configurations zz...z and yy...y (see epochs 6000 to 12000), which yields strongly improved results for ⟨Ŝ^z_i Ŝ^z_{i+1}⟩. In the last three queries, measurements from the xx...x, xxzxxzzx and xzyzzzzz configurations improve the reconstruction of the correlators in all directions up to the threshold values, and the learning procedure is stopped.

Kinetically constrained spin chain model
In Figs. 10 to 13 the tomography results for the kinetically constrained spin chain with t = 1 and h/t = 0, µ/t = 0 or h/t = 1, µ/t = 1, respectively, are summarized. For these states no full state vectors are available and hence the fidelity cannot be used to evaluate the quality of the reconstruction. Instead, we calculate the density and the correlator from Eqs. (13) and (15) for the reconstructed states. The target and reconstructed densities n_target and n have N − 1 entries (one for each possible domain wall between sites i and i + 1). The spatially resolved domain wall densities for the kinetically constrained spin chain ground states with h/t = 0, µ/t = 0 and h/t = 1, µ/t = 1 are shown in Figs. 10 and 11 (bottom) for a system with 19 qubits. For h/t = 0 and µ/t = 0 the local target density is n_i = 0.5. For h/t = 1 and µ/t = 1, the conserved total density is equal to n_tot = 8/18 and one can observe Friedel oscillations, with a wavevector proportional to k_F = π n_tot [55]. For both target KCS model ground states, the reconstructed states have a domain wall density which agrees with the target density in terms of magnitude and general characteristics (i.e. oscillations) when using AL.
In contrast, the baseline without AL results in underestimated local densities n_i, which can be understood from the fact that the densities in Eq. (13) are defined using Ŝ^x_i operators, which cannot be efficiently captured by the default zz...z reference configuration of the baseline RBMs.
The AL learning scheme naturally overcomes this problem, since it chooses the xx...x configuration as the reference configuration. This selection of the reference configuration is based only on the amplitudes and phases of the reconstructed wave functions at step 2 of the AL scheme.
A similar tendency can be observed for the target and reconstructed correlators c_target and c. They have ⌊N/2⌋ entries, since we calculate the correlator over distance d for the site in the middle of the chain. Also here, the baseline reconstruction without AL yields values of c(d) with a much smaller magnitude than for the target state, for all distances d and both parameter sets (h/t = 0, µ/t = 0 and h/t = 1, µ/t = 1), for the same reason as for the densities. In contrast, when using active learning the reconstructed correlator values are of the same magnitude as for the target state and even follow local features (see the bending of the curve in Fig. 11). Moreover, the power law for h/t = 0 and the exponential decay for h/t = 1 can be reconstructed when using AL, but not for the baseline (see Appendix B.2).
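The bookkeeping of these ⌊N/2⌋ correlator entries can be illustrated with a simplified spin-spin correlator estimated from projective samples; this is only a sketch of the indexing, not the gauge-invariant correlator of Eq. (15), and all names are illustrative:

```python
import numpy as np

# Sketch: correlator over distance d from the middle site of the chain,
# estimated from projective samples of +-1 outcomes. This is a simplified
# two-point correlator, not the gauge-invariant correlator of Eq. (15);
# it only illustrates why the result has floor(N/2) entries.
def middle_correlator(samples):
    """samples: (n_shots, N) array of +-1 measurement outcomes."""
    n_shots, N = samples.shape
    mid = N // 2
    c = []
    for d in range(1, N // 2 + 1):
        j = mid + d if mid + d < N else mid - d  # stay inside the chain
        c.append(np.mean(samples[:, mid] * samples[:, j]))
    return np.array(c)

rng = np.random.default_rng(2)
samples = np.where(rng.random((100, 7)) < 0.5, -1, 1)  # toy 7-qubit data
print(middle_correlator(samples).shape)                # (3,) = floor(7/2)
```

For random uncorrelated toy data the entries are close to zero; for the reconstructed states the same indexing is applied to the correlator of Eq. (15).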
To conclude, for the KCS model with 19 qubits the reconstruction is improved drastically when using AL compared to the baseline scenario. This conclusion can also be drawn from the reconstruction of the KCS model states for other system sizes, shown in Figs. 12 and 13. For h/t = 0 the change of reference basis yields a reduction of the density differences between target and reconstructed state by a factor of 5 for AL, from a relative value of around 44 % to around 6 % when averaging over all system sizes. The difference of correlators is decreased from 69 % to about 17 %.
For h/t = 1 the difference in densities is decreased from an absolute value of around 61 % to less than 12 %.The difference of target and reconstructed correlators is decreased to around 16 % from 74 %.
For both values of h and all system sizes, the active learner selected the xx...x configuration as the reference basis. This differs from the usual RBM procedure implemented in QuCumber, which always uses the zz...z configuration as the reference basis. For almost all sizes the stopping condition was reached within the first learning cycle in step 3 (except for the 15 qubit state (h/t = 0), where another measurement in the reference basis was requested). Hence, this change of the reference frame by applying active learning alone already improves the results substantially, even without a need for the further steps 4 and 5.

Summary and Outlook
In this work, we propose and implement an active learning scheme for adaptive quantum state tomography. The active learning scheme uses the information available in the already measured data to propose the basis configuration for the next measurement with the largest possible information gain. Inspired by the query-by-committee strategy, the information gain is calculated by taking into account the variance of reconstruction outcomes for the different members of the committee. We show that for a given number of measurements, our scheme provides a significant improvement in the reconstructed quantum state compared to a random choice of basis configurations. Our scheme brings the advantage that the information content of new measurement configurations is inferred from the variance of neural network quantum state representations based on previous measurements of the target state, and not from new measurements of the state under consideration, i.e. it reduces the experimental effort. However, our implementation relies on the calculation of the variance for the set of potential next measurement configurations, which scales exponentially with the system size. This could be overcome by more advanced methods of exploring the space of measurement configurations, e.g. reinforcement learning, which will be considered in future work. Hereby, we imagine a reinforcement learning agent, trained to navigate the space of all potential measurement configurations very efficiently, that selects possible candidates for the next measurement configurations before applying our AL scheme. The active learning scheme is generally applicable to different quantum states and devices, such as trapped ions, neutral atoms in optical tweezers, and superconducting qubits, as shown here. With the increasing number of quantum devices, the need for an efficient way to characterize the realized quantum states arises. Applications range from the verification of quantum computing devices, e.g. testing how faithfully a given quantum state can be prepared, to probing exotic states of matter realized in (analog) quantum simulators, such as the recently realized quantum spin liquid states [59,60], where measurements in different bases are necessary to characterize the quantum state.
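The query-by-committee selection step can be sketched as follows; the function names, the disagreement measure over outcome probabilities, and the toy data are illustrative assumptions, not the actual implementation:

```python
import numpy as np

# Hedged sketch of query-by-committee: among candidate measurement
# configurations, request the one where the committee of reconstructed
# wave functions disagrees the most. The disagreement measure below
# (variance across committee members, summed over outcomes) is
# illustrative, not the paper's exact criterion.
def select_next_configuration(committee_probs, candidates):
    """committee_probs: dict config -> (n_members, n_outcomes) array of
    outcome probabilities predicted by each committee member."""
    scores = {}
    for config in candidates:
        probs = committee_probs[config]
        scores[config] = np.var(probs, axis=0).sum()
    return max(scores, key=scores.get)

# toy example: 3-member committee, two candidate configurations
committee = {
    "zz": np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]),  # full agreement
    "xy": np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),  # disagreement
}
print(select_next_configuration(committee, ["zz", "xy"]))  # -> xy
```

The configuration with the largest committee variance is queried next, since a measurement there is expected to discriminate best between the committee members.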
Apart from the implementation of our protocol in an interactive experimental feedback loop, possible directions for future work include more advanced schemes, e.g. allowing for more possible reference bases in the first step. Our active learning scheme can furthermore be generalized to state representations other than the restricted Boltzmann machines considered here, such as variational autoencoders [20], recurrent [17] and convolutional neural networks [21], generative adversarial networks [22], and transformer architectures [23].
Another exciting future direction is the combination of the active learning scheme introduced here with the recently proposed classical shadows [61] by extending the possible active learning actions to different unitary gates, potentially involving two or more qubits.
A Quantum state representation in terms of a restricted Boltzmann machine

Part of our active learning scheme is the training of a committee of restricted Boltzmann machines (see Fig. 1). In this work, the implementation of RBMs within the open-source Python package QuCumber [62] is used for the training. The package is designed to learn quantum many-body wave functions from a set of projective measurements in different basis configurations by representing the reconstructed state with RBMs. In this section, we present how to represent quantum states in terms of RBMs and give more details on the RBM training. For an infinite number of measurements in a reference basis, i.e. in the b = z basis, the measurements adhere to Born's rule and the probability of finding a measurement result x is q(x) = |Ψ(x)|². QuCumber creates an RBM network with a probability distribution given by the Boltzmann distribution and (by summing over the hidden nodes) the distribution over the visible nodes, p_λ(v) = Σ_h p_λ(v, h), where v and h are the visible and hidden nodes of the RBM and W_ij the weights between visible node i and hidden node j [63], as shown in Fig. 2. During training, only k iterations of Gibbs sampling are performed (contrastive divergence steps). For more details on the training procedure we refer the reader to Ref. [11].
For positive wave functions the RBM representation can be defined as ψ_λ(x) = √(p_λ(x)), with p_λ(x) the RBM distribution over the visible nodes and a normalization constant Z_λ [62].
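As a minimal sketch of the positive-wave-function case, the RBM marginal over the visible units and the amplitude ψ_λ(v) = √(p_λ(v)) can be evaluated for a toy RBM; the parameter values and brute-force normalization below are illustrative, not QuCumber's internals:

```python
import numpy as np

# Sketch: unnormalized RBM marginal over visible units and the positive
# wave-function amplitude psi(v) = sqrt(p(v)). Parameters are random toy
# values, and normalization is done by brute-force enumeration, which is
# only feasible for a handful of qubits.
rng = np.random.default_rng(0)
n_visible, n_hidden = 3, 2
b = rng.normal(size=n_visible)           # visible biases
c = rng.normal(size=n_hidden)            # hidden biases
W = rng.normal(size=(n_visible, n_hidden))

def p_unnormalized(v):
    # hidden sum done analytically: prod_j (1 + exp(c_j + v . W_j))
    return np.exp(b @ v) * np.prod(1.0 + np.exp(c + v @ W))

configs = [np.array([int(s) for s in np.binary_repr(i, n_visible)])
           for i in range(2 ** n_visible)]
Z = sum(p_unnormalized(v) for v in configs)          # partition function
psi = np.array([np.sqrt(p_unnormalized(v) / Z) for v in configs])
print(round(float(psi @ psi), 6))                    # 1.0 (normalized)
```

The analytic sum over the hidden units is what makes RBM amplitudes cheap to evaluate for a given visible configuration.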
For more general wave functions with complex-valued coefficients, the probability distribution underlying the outcomes of projective measurements in the reference basis does not contain all possible information about the unknown quantum state, because the information about the phase is lost when only considering the underlying probability distribution of measurements in one basis, q(x) = |Ψ(x)|². In this case QuCumber represents the quantum state as defined in Eq. (1) and trains two RBMs with parameters λ and µ separately. The RBM with parameters λ models the amplitude of the RBM wave function, the second RBM with parameters µ the phase θ_µ. Firstly, the amplitude is learned from measurements in the reference basis b, which is set to the z basis by default. Secondly, measurements in other basis configurations are considered to determine the phase θ_µ by training the second RBM [11].

Figure 14: Measurement of a quantum system with two qubits (q0 and q1) in the xy basis configuration. Therefore q0 is rotated from the z to the x axis by applying the Hadamard gate, and q1 to the y axis by application of a combination of S and Hadamard gates (see Eqs. (20) and (21)). Figure generated with Qiskit [51].
The information about the phase is extracted via the rotation of single qubits within a quantum circuit into another basis. QuCumber uses the z basis as the reference basis by default and rotates to the x and y bases to extract the phase information. The rotation of a single qubit to the x basis is achieved by applying the Hadamard gate, and the rotation to the y basis by applying a combination of the Hadamard gate and the S-adjoint gate. The rotations are performed by using the predefined quantum operations of Qiskit, as shown for an exemplary rotation from zz to xy in Fig. 14. For the DMRG states, we use the same gates as local basis rotation matrices. The representation of a complex wave function in terms of the RBMs as explained above is implemented with the QuCumber package by using the ComplexWaveFunction method.
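These single-qubit rotations can be checked numerically; the following sketch builds the standard matrices for H and S† and verifies that the combined rotation maps the y eigenstates onto the computational (z) basis:

```python
import numpy as np

# Single-qubit basis-rotation matrices applied before a projective z
# measurement: H rotates the x basis to z, H S-dagger rotates y to z.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S_dag = np.array([[1, 0], [0, -1j]])
U_y = H @ S_dag

# y eigenstates |+i> = (|0> + i|1>)/sqrt(2), |-i> = (|0> - i|1>)/sqrt(2)
plus_i = np.array([1, 1j]) / np.sqrt(2)
minus_i = np.array([1, -1j]) / np.sqrt(2)

# After the rotation a z measurement distinguishes the y eigenstates:
print(np.round(np.abs(U_y @ plus_i) ** 2, 6))   # [1. 0.]
print(np.round(np.abs(U_y @ minus_i) ** 2, 6))  # [0. 1.]
```

The analogous check for H alone maps the x eigenstates |±⟩ onto |0⟩ and |1⟩, which is exactly the rotation applied to q0 in Fig. 14.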

B Details of active learning QST
When using the AL procedure as described in section 2, within each cycle (step 3), 4 RBMs are used if not stated otherwise, with 1000 epochs, a learning rate of l = 0.07 and contrastive divergence steps k = 100.

B.1 IBM quantum states
For the generation of states on a classical quantum simulator we use the Aer simulator, which is designed to mimic the execution of an actual device [50].For real quantum states the devices ibmq bogota and ibmq quito were used, which both consist of 5 superconducting qubits.
In Figs. 15 and 16 the results for a pure RBM reconstruction of real quantum states (5 qubits) without AL are shown. In Fig. 15 the number of samples is varied while keeping the number of configurations fixed (6 configurations). In Fig. 16 the number of configurations is varied while fixing the number of samples (2000 samples). It can be seen that it is difficult and very time consuming to find the most efficient set of measurements and configurations by such scans. Furthermore, the number of samples and configurations yielding the best reconstruction varies from state to state. This makes our active learning scheme very valuable, since it chooses the number of samples and configurations on its own, and only the number of samples at the beginning and per query has to be fixed by the user. As can be seen from Sec. 3, equally good results can be obtained for all states when setting the number of samples per query to 1 or 2. Hence, the only free parameter is the number of samples at the beginning of the AL.

B.2 DMRG states
For lattice gauge model states we consider states with µ = 10 −7 ≈ 0, since the convergence is much faster than for µ = 0.
In Fig. 17 the learning curve for a lattice gauge model state with 19 qubits and h/t = 1 is shown. When using AL, the xx...x configuration is chosen as the reference basis. In Figs. 18 and 19 the local correlators over distance d (top) and the densities over the system size (bottom) are shown for KCS states with 7 qubits and h/t = 0, µ/t = 0 and h/t = 1, µ/t = 1, respectively. Similarly to the results for 19 qubits in Figs. 10 and 11, one can see that the agreement with the target state is improved when using the AL reconstruction scheme. Also here, features like the curvature of the correlator can be reproduced with AL, but not with the baseline. For the parameters h/t = 1 and µ/t = 0 the density increases to a maximum in the middle of the chain. Even though this behaviour can be reproduced by both AL and the baseline, the results for the baseline are around half the magnitude of the target state. In contrast, the AL reconstruction yields densities with magnitudes much closer to the target state.
Moreover, when plotting the correlators for h/t = 0 (h/t = 1) with logarithmic scales for the x and y axes (y axis), as shown for a system with 19 qubits in Fig. 20 (Fig. 21), one can observe the expected power-law (exponential) decay for the target state. This power-law (exponential) decay is reconstructed when using AL, but not for the baseline.

Figure 20: Correlator over distance for the KCS model state with h/t = 0, µ/t = 0 and 19 qubits for the target state (black) and the reconstructed states using AL (blue) and the baseline (green), as in Fig. 10 but with logarithmic scaling for the x and y axes.
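The distinction between the two decay laws used here can be sketched numerically: a power law is a straight line on log-log axes, an exponential on semi-log axes. The synthetic data and the goodness-of-fit comparison below are illustrative, not the paper's analysis:

```python
import numpy as np

# Sketch: distinguishing power-law from exponential decay of a correlator
# c(d) by comparing linear fits on log-log vs semi-log axes. Synthetic
# decay exponents are illustrative.
d = np.arange(1, 10, dtype=float)
c_power = d ** -1.5          # power law: linear in log-log
c_exp = np.exp(-0.8 * d)     # exponential: linear in semi-log

def r2(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# power-law data fits better on log-log, exponential data on semi-log
print(r2(np.log(d), np.log(c_power)) > r2(d, np.log(c_power)))  # True
print(r2(d, np.log(c_exp)) > r2(np.log(d), np.log(c_exp)))      # True
```

Applied to the reconstructed correlators, the AL results follow the target's straight line in the appropriate scaling, while the baseline does not.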
C Kinetically constrained spin model and Z2 lattice gauge theory

In this section we provide more background information on the kinetically constrained quantum spin model, see Eq. (12), considered in the main text.
Here â†_j is the hardcore boson creation operator, defined on site j, and n̂_j = â†_j â_j is the local number operator. The Z2 gauge and electric fields are represented by the Pauli matrices τ^z_⟨i,i+1⟩ and τ^x_⟨i,i+1⟩, respectively, defined on the links between neighboring lattice sites. This model is appealing since it exhibits confinement of dynamical particles, which is induced by any nonzero Z2 electric field term h ≠ 0 [55,56].
The generator of the local Z2 gauge symmetry of Eq. (24) can be written as Ĝ_j = τ^x_⟨j−1,j⟩ (−1)^{n̂_j} τ^x_⟨j,j+1⟩, with Ĝ_j = 1 in the physical sector (the Z2 Gauss law). From here it is relatively straightforward to obtain back the constrained spin model, Eq. (12), from the main text. One notices that the charge configuration is entirely determined by the Z2 electric fields due to the constraints imposed by the Z2 Gauss law. This allows one to formulate the Hamiltonian entirely in terms of the gauge field, by identifying the presence of a particle on a lattice site with anti-alignment of the Z2 electric fields defined on the links connecting that site. This also means that the particles, or domain walls, are connected with Z2 electric fields of the same orientation, which we interpret as strings and anti-strings connecting the Z2 charges.
Using the above interpretation, it is straightforward to see that the first term in model (12) corresponds to the kinetic term, in which the number of domain walls is conserved. This ensures two things in the lattice gauge interpretation: the Z2 electric string remains attached to the hopping particle, and the total number of particles is conserved. The second term ∝ h in Eq. (12) induces an energy cost for strings and as a result confines the particle pairs into dimers. Finally, the Ising term is needed to control the number of domain walls and thus the filling of the chain in the 1D LGT interpretation.

C.2 The gauge-invariant equal-time Green's function
The correlation function in Eq. (15) can be mapped to the Z2 invariant equal-time Green's function, defined as G(i, j) = ⟨ â†_i ( Π_{i≤l<j} τ^z_⟨l,l+1⟩ ) â_j + h.c. ⟩. This is once again done by taking into account the constraint imposed by the Gauss law in Eq. (26).
Both terms ∝ 1 ± 4Ŝ^x_l Ŝ^x_{l+1}, l ∈ {i, j}, act as projectors onto the states where there is a site with a particle to be annihilated and an empty site where a particle can be created. The actual annihilation at site j and creation of the particle at site i comes from the application of the string Π_l 2Ŝ^z_l between the domain walls. Combining both terms, first applying the projectors and then performing the annihilation and creation of the particle, gives us the Z2 Green's function expressed entirely in terms of gauge fields. Note that the order is slightly changed in Eq. (15) of the main text by considering the spin anticommutation property.
Such a correlation function probes the confinement of particle pairs in the LGT interpretation. It decays as a power law in the deconfined phase h = 0 and decays exponentially in the confined phase, which is the case for any non-zero value of the field [55,56]. On the other hand, the pair-pair correlation function decays as a power law also in the confined phase, which means that the effective dimers behave as a Luttinger liquid [55].

Figure 1 :
Figure 1: Quantum state tomography (QST) by active learning. We show our active learning cycle: In the first stage a reference measurement basis is chosen which is most suitable for reconstructing the state (grey). Then, based on the already measured data, the active learner requests specific highly informative samples (blue). These are used to train an RBM representing the reconstructed quantum state.

Figure 2 :
Figure 2: Schematic representation of an RBM with visible nodes ⃗x, hidden nodes ⃗h, biases ⃗b, ⃗c and weights W.
Samples from a reference measurement configuration C ref are needed to learn the wave function amplitudes from the measurement outcomes.

Figure 3 :
Figure 3: Calculating the level of disagreement between different wave function representations of different members of the committee (rows) as given by Eq. (6).The shown calculation is performed separately for each configuration x i .For simplicity, we show the procedure for real-valued wave functions and refer the reader to the text for the explanation of the full AL scheme that includes the states' phase structure.
The stopping threshold is set to a lower value of f^{1/N}_stop = 80 % (except for the GHZ state with f^{1/N}_stop = 90 % and the GHZ_φ state with f^{1/N}_stop = 85 %). The number of samples, queries and configurations at the end of the training is presented in Tab. 1.

Figure 6 :
Figure 6: Exemplary learning curves for the GHZ state (see Eq. (7)) with 5 qubits generated on a classical quantum simulator by IBM. Both 1 − f^{1/N} (with f^{1/N} the fidelity) and the Kullback-Leibler divergence KL are reduced when using AL, compared to a reconstruction with the same number of measurements, but randomly chosen measurement configurations.

Figure 7 :
Figure 7: Exemplary learning curves for the GHZ state with phase (see Eq. (8)) with 5 qubits generated on a quantum device of IBM. Both 1 − f^{1/N} (with f^{1/N} the fidelity) and the Kullback-Leibler divergence KL are reduced when using AL, compared to a reconstruction with the same number of measurements, but randomly chosen measurement configurations.

Figure 8 :
Figure 8: Reconstruction results for the ground states of the XXZ Hamiltonian (11).

Figure 9 :
Figure 9: Learning curve for the ground state of the XXZ Hamiltonian (11) with ∆ = −1.The values for the target states are represented by the black lines.

Figure 12 :
Figure 12: Relative difference of target and reconstructed densities (left) and correlators (right) from Eqs. (13) and (15) of the KCS model with h/t = 0. Error bars correspond to the standard error of the mean.

Figure 13 :
Figure 13: Relative difference of target and reconstructed densities (left) and correlators (right) from Eqs. (13) and (15) of the KCS model with h/t = 1 and µ/t = 1.An exemplary learning curve for 19 qubits is shown in the Appendix B.2.

Figure 15 :
Figure 15: Reconstruction results for a pure RBM reconstruction of quantum states prepared on an IBM device (5 qubits) without AL.The number of samples is varied by keeping the number of configurations fixed (6 configurations).

Figure 16 :
Figure 16: Reconstruction results for a pure RBM reconstruction of real quantum states (5 qubits) without AL. The number of configurations is varied by keeping the number of samples fixed (2000 samples).

Figure 17 :
Figure 17: Learning curve for a lattice gauge model state with 19 qubits and h/t = 1.

Figure 19 :
Figure19: Correlator over distance (top) and density (bottom) for the KCS state with h/t = 1, µ/t = 1 and 7 qubits for target state (black) and the reconstructed states using AL (blue) and the baseline (green).

Table 1 :
Number of samples N_tot, number of queries N_queries selected by the active learner, and total number of configurations N_conf at the end of the learning for the reconstruction of the quantum states generated on IBM's classical quantum simulators and quantum devices. Here, the number of samples per query is N_per query = 1, except for the GHZ_φ state on the quantum device, where N_per query = 10. The reference configuration selected by the AL is zz...z for the z-spins, GHZ and GHZ_φ states, and xx...x for the x-spins state. All states are defined in Eqs. (7) to (10).

Table 2 :
Reference configuration selected by the AL, number of samples N_tot, number of queries N_queries, number of samples per query N_per query, and number of configurations N_conf selected by the active learner for the reconstruction of the DMRG states, i.e. the XXZ model and kinetically constrained spin (KCS) model ground states.
For a finite data set D = (x₁, x₂, . . .), QuCumber trains the RBM such that the Kullback-Leibler divergence (Eq. (2)) is minimized [64]. The minimization is performed by gradient descent, which involves the calculation of expectation values with respect to the distributions of the data and the model. To calculate the expectation value over the model distribution, one usually uses Markov chain Monte Carlo sampling. Due to the restricted nature of RBMs, hidden and visible units are conditionally independent and hence the conditional probabilities factorize. Consequently, it is possible to calculate the conditional distributions of all visible/hidden nodes in parallel by sampling h_{t+1} ∼ p(h|v_t) and v_{t+1} ∼ p(v|h_{t+1}), where t counts the steps in the Monte Carlo chain (block Gibbs sampling). For t → ∞ the chain is guaranteed to converge [64]. A slight modification which simplifies the training process is called contrastive divergence.
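A single contrastive-divergence update can be sketched as follows; the toy dimensions, hyperparameters, and single-sample update below are illustrative and not QuCumber's implementation:

```python
import numpy as np

# Sketch of one k-step contrastive divergence (CD-k) update for a small
# binary RBM, on a single data vector. All parameters are toy values.
rng = np.random.default_rng(1)
n_v, n_h, k, lr = 4, 3, 5, 0.1
W = rng.normal(scale=0.1, size=(n_v, n_h))  # weights
b = np.zeros(n_v)                           # visible biases
c = np.zeros(n_h)                           # hidden biases

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd_step(v_data):
    # positive phase: hidden activations conditioned on the data
    ph_data = sigmoid(c + v_data @ W)
    # negative phase: k steps of block Gibbs sampling, started at the data
    v = v_data.copy()
    for _ in range(k):
        h = (rng.random(n_h) < sigmoid(c + v @ W)).astype(float)
        v = (rng.random(n_v) < sigmoid(b + W @ h)).astype(float)
    ph_model = sigmoid(c + v @ W)
    # approximate log-likelihood gradients (data term minus model term)
    return (np.outer(v_data, ph_data) - np.outer(v, ph_model),
            v_data - v, ph_data - ph_model)

v_data = np.array([1.0, 0.0, 1.0, 0.0])
dW, db, dc = cd_step(v_data)
W += lr * dW; b += lr * db; c += lr * dc
print(dW.shape)   # (4, 3)
```

In the settings of Appendix B this corresponds to k = 100 Gibbs steps per gradient update, with a learning rate of 0.07.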
Figure 21: Correlator over distance for the KCS model state with h/t = 1, µ/t = 1 and 19 qubits for the target state (black) and the reconstructed states using AL (blue) and the baseline (green), as in Fig. 11 but with logarithmic scaling for the y axis.

C.1 Mapping to a Z2 lattice gauge theory

The model in Eq. (12) can be mapped to a one-dimensional Z2 lattice gauge theory model with U(1) matter [55,56]. To this end, domain walls in the spin model are mapped to hardcore bosons, which are coupled to Z2 gauge fields defined on the links between the lattice sites i, j. As explained below, one obtains the following equivalent Hamiltonian,

Ĥ_Z2 = −t Σ_i ( â†_i τ^z_⟨i,i+1⟩ â_{i+1} + h.c. ) − h Σ_i τ^x_⟨i,i+1⟩ − µ Σ_i n̂_i .   (24)