Two-photon driven Kerr quantum oscillator with multiple spectral degeneracies
Kerr nonlinear oscillators driven by a two-photon process are promising systems to encode quantum information and to ensure a hardware-efficient scaling towards fault-tolerant quantum computation. In this paper, we show that an extra control parameter, the detuning of the two-photon drive with respect to the oscillator resonance, plays a crucial role in the properties of the defined qubit. At specific values of this detuning, we benefit from strong symmetries in the system, leading to multiple degeneracies in the spectrum of the effective confinement Hamiltonian. Overall, these degeneracies lead to a stronger suppression of bit-flip errors. We also study the combination of such Hamiltonian confinement with colored dissipation to suppress leakage outside of the bosonic code space. We show that the additional degeneracies allow us to perform fast and high-fidelity gates while preserving a strong suppression of bit-flip errors.
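As a rough illustration of the confinement Hamiltonian discussed above, the sketch below diagonalizes a detuned, two-photon-driven Kerr oscillator in QuTiP and prints the level spacings of the confined manifold as the detuning is swept. The rotating-frame form H = Δ a†a − K a†²a² + ε₂(a†² + a²) and all parameter values are standard-literature assumptions, not the authors' model or numbers.

```python
# Minimal QuTiP sketch (not the authors' code): spectrum of a detuned,
# two-photon-driven Kerr oscillator, H = Delta*n - K*ad^2 a^2 + eps2*(ad^2 + a^2).
import numpy as np
import qutip as qt

N = 60                       # Fock-space truncation
a = qt.destroy(N)
K, eps2 = 1.0, 4.0           # Kerr strength and two-photon drive (units of K)

for delta in np.linspace(0.0, 4.0, 9):        # detuning swept in units of K
    H = delta * a.dag() * a - K * a.dag()**2 * a**2 + eps2 * (a.dag()**2 + a**2)
    evals = np.sort(H.eigenenergies())
    top = evals[-8:]                          # confined (cat) manifold sits at the top
    print(f"delta/K = {delta:4.1f}  level spacings near the top:", np.round(np.diff(top), 3))
```

Near-zero spacings in this printout correspond to (near-)degenerate pairs; the paper's point is that such degeneracies multiply at specific detunings.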
Combined Dissipative and Hamiltonian Confinement of Cat Qubits
Quantum error correction with biased-noise qubits can drastically reduce the hardware overhead for universal and fault-tolerant quantum computation. Cat qubits are a promising realization of biased-noise qubits as they feature an exponential error bias inherited from their non-local encoding in the phase space of a quantum harmonic oscillator. To confine the state of an oscillator to the cat qubit manifold, two main approaches have been considered so far: a Kerr-based Hamiltonian confinement with high gate performance, and a dissipative confinement with robust protection against a broad range of noise mechanisms. We introduce a new combined dissipative and Hamiltonian confinement scheme based on two-photon dissipation together with a Two-Photon Exchange (TPE) Hamiltonian. The TPE Hamiltonian is similar to a Kerr nonlinearity but, unlike the Kerr Hamiltonian, it induces only a bounded distinction between even- and odd-photon eigenstates, a highly beneficial feature for protecting the cat qubits with dissipative mechanisms. Using this combined confinement scheme, we demonstrate fast and bias-preserving gates with drastically improved performance compared to dissipative or Hamiltonian schemes. In addition, this combined scheme can be implemented experimentally with only minor modifications of existing dissipative cat qubit experiments.
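The sketch below shows only the dissipative half of such a combined scheme: an engineered two-photon dissipator L = √κ₂ (a² − α²) relaxes an initial state into the cat manifold. The TPE Hamiltonian itself is not modeled here, and κ₂, α and the initial state are illustrative choices, not the paper's parameters.

```python
# Minimal QuTiP sketch of dissipative cat-qubit confinement via two-photon loss.
import numpy as np
import qutip as qt

N, alpha, kappa2 = 40, 2.0, 1.0
a = qt.destroy(N)
L = np.sqrt(kappa2) * (a**2 - alpha**2 * qt.qeye(N))   # two-photon dissipator

rho0 = qt.fock_dm(N, 0)                  # vacuum: even photon-number parity
tlist = np.linspace(0, 10, 101)
result = qt.mesolve(qt.qzero(N), rho0, tlist, c_ops=[L])

# Starting from an even-parity state, the dynamics converge to the even cat |C+>.
cat_plus = (qt.coherent(N, alpha) + qt.coherent(N, -alpha)).unit()
print("final fidelity with |C+>:", qt.fidelity(result.states[-1], cat_plus))
```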
Optimal fidelity in implementing Grover's search algorithm on open quantum system
We investigate the fidelity of Grover's search algorithm by implementing it on an open quantum system. In particular, we study how accurately the algorithm delivers the searched state. In reality, every system is influenced to some degree by its environment. We include the environmental effects on the system dynamics by using a recently reported fluctuation-regulated quantum master equation (FRQME). The FRQME indicates that in addition to the regular relaxation due to system-environment coupling, the applied drive also causes dissipation in the system dynamics. As a result, the fidelity is found to depend on both the drive-induced dissipative terms and the relaxation terms, and we find that there exists a competition between them, leading to an optimum value of the drive amplitude for which the fidelity becomes maximum. For efficient implementation of the search algorithm, precise knowledge of this optimum drive amplitude is essential.
Impact of Static Disorder and Dephasing on Quantum Transport in LH1-RC Models
We numerically study excitation transfer in an artificial LH1-RC complex -- an N-site donor ring coupled to a central acceptor -- driven by a narrowband optical mode and evolved under a Lindblad master equation with loss and dephasing. In the absence of disorder, the light-driven system exhibits a tall, narrow on-resonance efficiency peak (near unity for our parameters); dephasing lowers and narrows this peak without shifting its position. Off resonance, the efficiency shows environmentally assisted transport with a clear non-monotonic dependence on dephasing and a finite optimum. Under static disorder, two regimes emerge: photon-ring coupling and diagonal energetic disorder mix the drive into dark ring modes, activate dissipative channels, and depress efficiency over a detuning window, whereas intra-ring coupling disorder has a much smaller impact in the tested range; increasing the intra-ring coupling g moves dark-mode crossings away from the operating detuning and restores near-peak performance. In the ordered, symmetric, single-excitation, narrowband limit we analytically derive closed-form transfer efficiencies by projecting onto the k = 0 bright mode and solving the photon, bright-mode, and acceptor trimer via a Laplace/linear-algebra (determinant) formula; these expressions include a probability-conservation identity η + Σ_k L_k = 1 that benchmarks the simulations and quantitatively predicts the resonant line shape and its dephasing-induced narrowing. A minimal ring toy model further reproduces coherent trapping and its relief by moderate dephasing (ENAQT). These analytics are exact in the ordered limit and serve as mechanistic guides outside this limit, yielding practical design rules for robust, bio-inspired light-harvesting devices.
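A stripped-down, single-excitation version of this kind of ring-plus-acceptor Lindblad model can be written in a few lines of QuTiP. The sketch below omits the driven photon mode, the loss channels, and disorder averaging, and uses made-up parameters; without loss channels the identity η + Σ_k L_k = 1 reduces to η → 1 at long times.

```python
# Minimal single-excitation QuTiP sketch of an N-site donor ring coupled to a
# central acceptor that drains into a sink, with pure dephasing on the ring.
import numpy as np
import qutip as qt

N = 6                          # ring sites; basis: 0..N-1 ring, N acceptor, N+1 sink
dim = N + 2
g, J, Gamma, gph = 1.0, 0.3, 0.5, 0.1

def ket(i):
    return qt.basis(dim, i)

H = 0 * qt.qeye(dim)
for i in range(N):             # nearest-neighbour ring couplings (periodic)
    H += g * (ket(i) * ket((i + 1) % N).dag() + ket((i + 1) % N) * ket(i).dag())
    H += J * (ket(i) * ket(N).dag() + ket(N) * ket(i).dag())   # ring-acceptor coupling

c_ops = [np.sqrt(Gamma) * ket(N + 1) * ket(N).dag()]               # acceptor -> sink
c_ops += [np.sqrt(gph) * ket(i) * ket(i).dag() for i in range(N)]  # ring dephasing

psi0 = ket(0)                  # excitation injected on one donor site
tlist = np.linspace(0, 50, 201)
sink_pop = qt.mesolve(H, psi0, tlist, c_ops=c_ops,
                      e_ops=[ket(N + 1) * ket(N + 1).dag()]).expect[0]
print("transfer efficiency at t = 50:", sink_pop[-1])
```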
Anti-Hong-Ou-Mandel effect with entangled photons
In the classical Hong-Ou-Mandel (HOM) effect, pairs of photons with bosonic (fermionic) spatial wavefunctions coalesce (anti-coalesce) when mixed on a lossless beamsplitter. Here we report that the presence of dissipation in the beamsplitter allows the observation of the anti-HOM effect, in which bosons anti-coalesce and fermions show coalescent-like behavior. We provide an experimental demonstration of the anti-HOM effect for both bosonic and fermionic two-photon entangled states. Beyond its fundamental significance, the anti-HOM effect offers applications in quantum information and metrology where states of entangled photons are dynamically converted.
Polariton Enhanced Free Charge Carrier Generation in Donor-Acceptor Cavity Systems by a Second-Hybridization Mechanism
Cavity quantum electrodynamics has been studied as a potential approach to modify free charge carrier generation in donor-acceptor heterojunctions because of the delocalization and controllable energy level properties of hybridized light-matter states known as polaritons. However, in many experimental systems, cavity coupling decreases charge separation. Here, we theoretically study the quantum dynamics of a coherent and dissipative donor-acceptor cavity system, to investigate the dynamical mechanism and further discover the conditions under which polaritons may enhance free charge carrier generation. We use open quantum system methods based on single-pulse pumping to find that polaritons have the potential to connect excitonic states and charge separated states, further enhancing free charge generation on an ultrafast timescale of several hundred femtoseconds. The mechanism is that polaritons with suitable energy levels allow the exciton to overcome the high Coulomb barrier induced by electron-hole attraction. Moreover, we propose that a second hybridization between a polariton state and dark states with similar energy enables the formation of hybrid charge separated states that are optically active. These two mechanisms lead to a maximum of 50% enhancement of free charge carrier generation on a short timescale. However, our simulation reveals that on the longer timescale of picoseconds, internal conversion and cavity loss dominate and suppress free charge carrier generation, reproducing the experimental results. Thus, our work shows that polaritons can affect the charge separation mechanism and promote free charge carrier generation efficiency, but predominantly on a short timescale after photoexcitation.
Simulation of integrated nonlinear quantum optics: from nonlinear interferometer to temporal walk-off compensator
Nonlinear quantum photonics serves as a cornerstone in photonic quantum technologies, such as universal quantum computing and quantum communications. The emergence of integrated photonics platforms not only offers the advantage of large-scale manufacturing but also provides a variety of engineering methods. Given the complexity of integrated photonics engineering, a comprehensive simulation framework is essential to fully harness the potential of the platform. In this context, we introduce a nonlinear quantum photonics simulation framework which can accurately model a variety of features such as adiabatic waveguides, material anisotropy, linear optical components, photon losses, and detectors. Furthermore, utilizing the framework, we have developed a device scheme, chip-scale temporal walk-off compensation, that is useful for various quantum information processing tasks. Applying the simulation framework, we show that the proposed device scheme can enhance the squeezing parameter of photon-pair sources and the conversion efficiency of quantum frequency converters without relying on higher pump power.
The Unconventional Photon Blockade
We review the unconventional photon blockade mechanism. This quantum effect remarkably enables strongly sub-Poissonian light statistics, even from a system characterized by a weak single-photon nonlinearity. We revisit past results, which can be interpreted in terms of quantum interference or optimal squeezing, and show how recent developments on input-output field mixing can overcome the limitations of the original schemes towards passive and integrable single-photon sources. We finally present some valuable alternative schemes to which the unconventional blockade can be directly adapted.
Driving Enhanced Exciton Transfer by Automatic Differentiation
We model and study the processes of excitation, absorption, and transfer in various networks. The model consists of a harmonic oscillator representing a single-mode radiation field, a qubit acting as an antenna, a network through which the excitation propagates, and a qubit at the end serving as a sink. We investigate how off-resonant excitations can be optimally absorbed and transmitted through the network. Three strategies are considered: optimising network energies, adjusting the couplings between the radiation field, the antenna, and the network, or introducing and optimising driving fields at the start and end of the network. These strategies are tested on three different types of network with increasing complexity: nearest-neighbour and star configurations, and one associated with the Fenna-Matthews-Olson complex. The results show that, among the various strategies, the introduction of driving fields is the most effective, leading to a significant increase in the probability of reaching the sink in a given time. This result remains stable across networks of varying dimensionalities and types, and the driving process requires only a few parameters to be effective.
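One of the strategies named above, optimising the couplings by automatic differentiation, can be illustrated with a toy JAX script: gradient ascent on the end-site population of a single-excitation chain after a fixed evolution time. The chain, the time T, and the learning rate are hypothetical stand-ins; the paper's antenna, radiation mode, and driving fields are not modeled.

```python
# Minimal JAX sketch of the "optimise the couplings" strategy on a toy chain.
import jax
import jax.numpy as jnp

M, T = 5, 4.0                                   # chain length and evolution time

def hamiltonian(couplings):                     # nearest-neighbour single-excitation H
    H = jnp.zeros((M, M))
    idx = jnp.arange(M - 1)
    H = H.at[idx, idx + 1].set(couplings)
    H = H.at[idx + 1, idx].set(couplings)
    return H

def transfer_prob(couplings):
    evals, evecs = jnp.linalg.eigh(hamiltonian(couplings))
    U = (evecs * jnp.exp(-1j * evals * T)) @ evecs.T      # exp(-i H T) via eigenbasis
    return jnp.abs(U[-1, 0]) ** 2                          # site 0 -> last site population

loss_and_grad = jax.jit(jax.value_and_grad(lambda c: -transfer_prob(c)))
c = jnp.ones(M - 1) * 0.5
for step in range(300):                         # plain gradient descent on the couplings
    val, g = loss_and_grad(c)
    c = c - 0.1 * g
print("optimised transfer probability:", float(transfer_prob(c)))
```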
Clustered Geometries Exploiting Quantum Coherence Effects for Efficient Energy Transfer in Light Harvesting
Elucidating quantum coherence effects and geometrical factors for efficient energy transfer in photosynthesis has the potential to uncover non-classical design principles for advanced organic materials. We study energy transfer in a linear light-harvesting model to reveal that dimerized geometries with strong electronic coherences within donor and acceptor pairs exhibit significantly improved efficiency, which is in marked contrast to predictions of the classical Förster theory. We reveal that energy tuning due to coherent delocalization of photoexcitations is mainly responsible for the efficiency optimization. This coherence-assisted energy-tuning mechanism also explains the energetics and chlorophyll arrangements in the widely-studied Fenna-Matthews-Olson complex. We argue that a clustered network with rapid energy relaxation among donors and resonant energy transfer from donor to acceptor states provides a basic formula for constructing efficient light-harvesting systems, and the general principles revealed here can be generalized to larger systems and benefit future innovation of efficient molecular light-harvesting materials.
A photonic cluster state machine gun
We present a method to convert certain single photon sources into devices capable of emitting large strings of photonic cluster states in a controlled and pulsed "on demand" manner. Such sources would greatly reduce the resources required to achieve linear optical quantum computation. Standard spin errors, such as dephasing, are shown to affect only one or two of the emitted photons at a time. This allows for the use of standard fault tolerance techniques, and shows that the photonic machine gun can be fired for arbitrarily long times. Using realistic parameters for current quantum dot sources, we conclude that high entangled-photon emission rates are achievable, with Pauli-error rates per photon of less than 0.2%. For quantum dot sources the method has the added advantage of alleviating the problematic issues of obtaining identical photons from independent, non-identical quantum dots, and of exciton dephasing.
simple-idealized-1d-nlse: Pseudo-Spectral Solver for the 1D Nonlinear Schrödinger Equation
We present an open-source Python implementation of an idealized high-order pseudo-spectral solver for the one-dimensional nonlinear Schrödinger equation (NLSE). The solver combines Fourier spectral spatial discretization with an adaptive eighth-order Dormand-Prince time integration scheme to achieve machine-precision conservation of mass and near-perfect preservation of momentum and energy for smooth solutions. The implementation accurately reproduces fundamental NLSE phenomena including soliton collisions with analytically predicted phase shifts, Akhmediev breather dynamics, and the development of modulation instability from noisy initial conditions. Four canonical test cases validate the numerical scheme: single soliton propagation, two-soliton elastic collision, breather evolution, and noise-seeded modulation instability. The solver employs a 2/3 dealiasing rule with exponential filtering to prevent aliasing errors from the cubic nonlinearity. Statistical analysis using Shannon, Rényi, and Tsallis entropies quantifies the spatio-temporal complexity of solutions, while phase space representations reveal the underlying coherence structure. The implementation prioritizes code transparency and educational accessibility over computational performance, providing a valuable pedagogical tool for exploring nonlinear wave dynamics. Complete source code, documentation, and example configurations are freely available, enabling reproducible computational experiments across diverse physical contexts where the NLSE governs wave evolution, including nonlinear optics, Bose-Einstein condensates, and ocean surface waves.
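The core idea of such a solver (Fourier spectral derivatives plus a Dormand-Prince integrator) fits in a short script. The sketch below is not the released package: it uses SciPy's DOP853 stepper instead of a hand-rolled one, omits the 2/3 dealiasing filter, and propagates a single bright soliton of i u_t + (1/2) u_xx + |u|² u = 0 while checking mass conservation.

```python
# Minimal Fourier pseudo-spectral NLSE sketch with SciPy's Dormand-Prince (DOP853).
import numpy as np
from scipy.integrate import solve_ivp

Nx, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)          # spectral wavenumbers

def rhs(t, y):
    u = y[:Nx] + 1j * y[Nx:]
    u_xx = np.fft.ifft(-(k ** 2) * np.fft.fft(u))      # spectral second derivative
    dudt = 1j * (0.5 * u_xx + np.abs(u) ** 2 * u)      # from i u_t = -(1/2)u_xx - |u|^2 u
    return np.concatenate([dudt.real, dudt.imag])

u0 = 1.0 / np.cosh(x)                                  # fundamental bright soliton
y0 = np.concatenate([u0.real, u0.imag])
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="DOP853", rtol=1e-10, atol=1e-10)

uT = sol.y[:Nx, -1] + 1j * sol.y[Nx:, -1]
dx = L / Nx
print("mass drift:", np.sum(np.abs(uT) ** 2) * dx - np.sum(np.abs(u0) ** 2) * dx)
```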
Tutorial: Remote entanglement protocols for stationary qubits with photonic interfaces
Generating entanglement between distant quantum systems is at the core of quantum networking. In recent years, numerous theoretical protocols for remote entanglement generation have been proposed, of which many have been experimentally realized. Here, we provide a modular theoretical framework to elucidate the general mechanisms of photon-mediated entanglement generation between single spins in atomic or solid-state systems. Our framework categorizes existing protocols at various levels of abstraction and allows for combining the elements of different schemes in new ways. These abstraction layers make it possible to readily compare protocols for different quantum hardware. To enable the practical evaluation of protocols tailored to specific experimental parameters, we have devised numerical simulations based on the framework with our codes available online.
Tunable WS_2 Micro-Dome Open Cavity Single Photon Source
Versatile, tunable, and potentially scalable single-photon sources are a key asset in emergent photonic quantum technologies. In this work, a single-photon source based on WS_2 micro-domes, created via hydrogen ion irradiation, is realized and integrated into an open, tunable optical microcavity. Single-photon emission from the coupled emitter-cavity system is verified via the second-order correlation measurement, revealing a g^{(2)}(τ=0) value of 0.3. A detailed analysis of the spectrally selective, cavity enhanced emission features shows the impact of a pronounced acoustic phonon emission sideband, which contributes specifically to the non-resonant emitter-cavity coupling in this system. The achieved level of cavity-emitter control highlights the potential of open-cavity systems to tailor the emission properties of atomically thin quantum emitters, advancing their suitability for real-world quantum technology applications.
Two-photon interference: the Hong-Ou-Mandel effect
Nearly 30 years ago, two-photon interference was observed, marking the beginning of a new quantum era. Indeed, two-photon interference has no classical analogue, giving it a distinct advantage for a range of applications. The peculiarities of quantum physics may now be used to our advantage to outperform classical computations, securely communicate information, simulate highly complex physical systems and increase the sensitivity of precise measurements. This separation from classical to quantum physics has motivated physicists to study two-particle interference for both fermionic and bosonic quantum objects. So far, two-particle interference has been observed with massive particles such as electrons and atoms, as well as with plasmons, demonstrating that this effect extends to larger and more complex quantum systems. A wide array of novel applications of this quantum effect is to be expected in the future. This review will thus cover the progress and applications of two-photon (two-particle) interference over the last three decades.
Path-Integral Approach to Quantum Acoustics
A path-integral approach to quantum acoustics is developed here. In contrast to the commonly utilized particle perspective, this emerging field brings forth a long neglected but essential wave paradigm for lattice vibrations. Within the coherent state picture, we formulate a non-Markovian, stochastic master equation that captures the exact dynamics of any system with coupling linear in the bath coordinates and nonlinear in the system coordinates. We further demonstrate the capability of the presented master equation by applying the corresponding procedure to the eminent Fröhlich model. In general, we establish a solid foundation for quantum acoustics as a kindred framework to quantum optics, while paving the way for deeper first-principle explorations of non-perturbative system dynamics driven by lattice vibrations.
Generalized thermalization for integrable system under quantum quench
We investigate equilibration and generalized thermalization of the quantum harmonic chain under a local quantum quench. The quench we consider connects two disjoint harmonic chains of different sizes, so that the system jumps between two integrable settings. We verify the validity of the Generalized Gibbs Ensemble description for this infinite-dimensional Hilbert-space system and also identify equilibration between the subsystems as in classical systems. Using Bogoliubov transformations, we show that the eigenstates of the system prior to the quench evolve towards the Generalized Gibbs Ensemble description. Eigenstates that are more delocalized (in the sense of inverse participation ratio) prior to the quench tend to equilibrate more rapidly. Further, through the phase space properties of a Generalized Gibbs Ensemble and the strength of stimulated emission, we identify the necessary criterion on the initial states for such relaxation at late times and also identify the states that would potentially not be described by the Generalized Gibbs Ensemble.
Proposal for room-temperature quantum repeaters with nitrogen-vacancy centers and optomechanics
We propose a quantum repeater architecture that can operate under ambient conditions. Our proposal builds on recent progress towards non-cryogenic spin-photon interfaces based on nitrogen-vacancy centers, which have excellent spin coherence times even at room temperature, and optomechanics, which makes it possible to avoid phonon-related decoherence and also allows the emitted photons to be in the telecom band. We apply the photon number decomposition method to quantify the fidelity and the efficiency of entanglement established between two remote electron spins. We describe how the entanglement can be stored in nuclear spins and extended to long distances via quasi-deterministic entanglement swapping operations involving the electron and nuclear spins. We furthermore propose schemes to achieve high-fidelity readout of the spin states at room temperature using the spin-optomechanics interface. Our work shows that long-distance quantum networks made of solid-state components that operate at room temperature are within reach of current technological capabilities.
Programmable Heisenberg interactions between Floquet qubits
The fundamental trade-off between robustness and tunability is a central challenge in the pursuit of quantum simulation and fault-tolerant quantum computation. In particular, many emerging quantum architectures are designed to achieve high coherence at the expense of having fixed spectra and consequently limited types of controllable interactions. Here, by adiabatically transforming fixed-frequency superconducting circuits into modifiable Floquet qubits, we demonstrate an XXZ Heisenberg interaction with fully adjustable anisotropy. This interaction model is on one hand the basis for many-body quantum simulation of spin systems, and on the other hand the primitive for an expressive quantum gate set. To illustrate the robustness and versatility of our Floquet protocol, we tailor the Heisenberg Hamiltonian and implement two-qubit iSWAP, CZ, and SWAP gates with estimated fidelities of 99.32(3)%, 99.72(2)%, and 98.93(5)%, respectively. In addition, we implement a Heisenberg interaction between higher energy levels and employ it to construct a three-qubit CCZ gate with a fidelity of 96.18(5)%. Importantly, the protocol is applicable to various fixed-frequency high-coherence platforms, thereby unlocking a suite of essential interactions for high-performance quantum information processing. From a broader perspective, our work provides compelling avenues for future exploration of quantum electrodynamics and optimal control using the Floquet framework.
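The tunable XXZ interaction underlying this gate set can be written down compactly. The sketch below builds H = (J/4)(XX + YY + Δ ZZ) in QuTiP and exponentiates it; the couplings, times, and the claim in the comment are textbook properties of the isotropic-XY limit, not the experimental Floquet parameters.

```python
# Minimal QuTiP sketch of a tunable XXZ Heisenberg interaction between two qubits.
import numpy as np
import qutip as qt

sx, sy, sz = qt.sigmax(), qt.sigmay(), qt.sigmaz()
XX, YY, ZZ = qt.tensor(sx, sx), qt.tensor(sy, sy), qt.tensor(sz, sz)

def xxz_unitary(J, Delta, t):
    H = (J / 4.0) * (XX + YY + Delta * ZZ)    # Delta = anisotropy knob
    return (-1j * H * t).expm()

# Delta = 0 (pure XX+YY exchange) at J*t = pi gives an iSWAP-like gate
# (equal to iSWAP up to the sign convention of the coupling J):
U = xxz_unitary(J=1.0, Delta=0.0, t=np.pi)
print(np.round(U.full(), 3))
```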
TempoRL: laser pulse temporal shape optimization with Deep Reinforcement Learning
The optimal performance of High Power Laser (HPL) systems is essential for the success of a wide variety of experimental tasks related to light-matter interactions. Traditionally, HPL parameters are optimised in an automated fashion relying on black-box numerical methods. However, these can be demanding in terms of computational resources and usually disregard transient and complex dynamics. Model-free Deep Reinforcement Learning (DRL) offers a promising alternative framework for optimising HPL performance, since it allows the control parameters to be tuned as a function of system states subject to nonlinear temporal dynamics, without requiring an explicit model of those dynamics. Furthermore, DRL aims to find an optimal control policy rather than a static parameter configuration, which is especially suitable for dynamic processes involving sequential decision-making. This is particularly relevant as laser systems are typically characterised by dynamic rather than static traits, hence the need for a strategy that chooses the applied control based on the current context instead of a single optimal control configuration. This paper investigates the potential of DRL in improving the efficiency and safety of HPL control systems. We apply this technique to optimise the temporal profile of laser pulses in the L1 pump laser hosted at the ELI Beamlines facility. We show how to adapt DRL to the setting of spectral phase control by solely tuning the dispersion coefficients of the spectral phase, reaching nearly transform-limited pulses with a full width at half maximum (FWHM) of ca. 1.6 ps.
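The control knob referred to above, the dispersion coefficients of the spectral phase, maps to the temporal pulse shape through a Fourier transform. The sketch below applies a quadratic (GDD) and cubic (TOD) spectral phase to a Gaussian spectrum and reports the resulting intensity FWHM; the bandwidth and dispersion values are illustrative, not the L1 laser's parameters.

```python
# Minimal sketch: temporal pulse shape as a function of spectral-phase dispersion terms.
import numpy as np

N, dt = 2 ** 14, 5e-15                                 # 5 fs sampling, ~82 ps window
t = (np.arange(N) - N // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)            # angular frequency offset [rad/s]

def pulse_fwhm(gdd, tod, bandwidth=2 * np.pi * 1e12):
    spectrum = np.exp(-omega ** 2 / (2 * bandwidth ** 2))     # Gaussian amplitude spectrum
    phase = 0.5 * gdd * omega ** 2 + tod * omega ** 3 / 6.0   # Taylor dispersion terms
    field = np.fft.fftshift(np.fft.ifft(spectrum * np.exp(1j * phase)))
    intensity = np.abs(field) ** 2
    above = t[intensity > 0.5 * intensity.max()]
    return above[-1] - above[0]

print("transform-limited FWHM [ps]:", pulse_fwhm(0.0, 0.0) * 1e12)
print("with GDD = 1 ps^2   FWHM [ps]:", pulse_fwhm(1e-24, 0.0) * 1e12)
```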
Solitons near avoided mode crossing in χ^{(2)} nanowaveguides
We present a model for χ^{(2)} waveguides accounting for three modes, two of which undergo an avoided crossing at the second-harmonic wavelength. We introduce two linearly coupled pure modes and adjust the coupling to replicate the waveguide dispersion near the avoided crossing. Analysis of the nonlinear system reveals continuous wave (CW) solutions across much of the parameter space and the prevalence of their modulational instability. We also predict the existence of avoided-crossing solitons, and study peculiarities of their dynamics and spectral properties, which include the formation of a pedestal in the pulse tails and associated pronounced spectral peaks. Mapping these solitons onto the linear dispersion diagrams, we relate their existence to the existence and stability of the CW solutions. We also simulate two-color soliton generation from a single-frequency pump pulse to confirm the solitons' formation and stability properties.
Coherent shuttle of electron-spin states
We demonstrate a coherent spin shuttle through a GaAs/AlGaAs quadruple-quantum-dot array. Starting with two electrons in a spin-singlet state in the first dot, we shuttle one electron over to either the second, third or fourth dot. We observe that the separated spin-singlet evolves periodically into the m=0 spin-triplet and back before it dephases due to nuclear spin noise. We attribute the time evolution to differences in the local Zeeman splitting between the respective dots. With the help of numerical simulations, we analyse and discuss the visibility of the singlet-triplet oscillations and connect it to the requirements for coherent spin shuttling in terms of the inter-dot tunnel coupling strength and rise time of the pulses. The distribution of entangled spin pairs through tunnel coupled structures may be of great utility for connecting distant qubit registers on a chip.
Generating arbitrary polarization states by manipulating the thicknesses of a pair of uniaxial birefringent plates
We report an optical method of generating arbitrary polarization states by manipulating the thicknesses of a pair of uniaxial birefringent plates, the optical axes of which are set at a crossing angle of π/4. The method has the remarkable feature of being able to generate a distribution of arbitrary polarization states in a group of highly discrete spectra without spatially separating the individual spectral components. The target polarization-state distribution is obtained as an optimal solution through an exploration. Within a realistic exploration range, a sufficient number of near-optimal solutions are found. This property is also reproduced well by a concise model based on a distribution of exploration points on a Poincaré sphere, showing that the number of near-optimal solutions behaves according to a power law with respect to the number of spectral components of concern. As a typical example of an application, by applying this method to a set of phase-locked highly discrete spectra, we numerically demonstrate the continuous generation of a vector-like optical electric field waveform, the helicity of which is alternated within a single optical cycle in the time domain.
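The scheme can be pictured with Jones matrices: two linear retarders whose axes cross at π/4, with the two retardances (set by the plate thicknesses) as free parameters. The sketch below sweeps a few retardance pairs and prints the output Stokes vector; the input polarization, retardance values, and the S3 sign convention are assumptions for illustration.

```python
# Minimal Jones-matrix sketch: two birefringent plates with axes crossed at pi/4.
import numpy as np

def waveplate(retardance, theta):
    """Jones matrix of a linear retarder with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([np.exp(-1j * retardance / 2), np.exp(1j * retardance / 2)])
    return R @ D @ R.T

def output_stokes(gamma1, gamma2, e_in=np.array([1.0, 0.0])):   # horizontal input assumed
    e_out = waveplate(gamma2, np.pi / 4) @ waveplate(gamma1, 0.0) @ e_in
    ex, ey = e_out
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * np.real(ex * np.conj(ey))
    s3 = -2 * np.imag(ex * np.conj(ey))                         # sign convention is ours
    return np.array([s1, s2, s3])                               # point on the Poincare sphere

for g1, g2 in [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi / 2, np.pi / 2), (np.pi, np.pi / 3)]:
    print(g1, g2, np.round(output_stokes(g1, g2), 3))
```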
Shaping Laser Pulses with Reinforcement Learning
High Power Laser (HPL) systems operate in the attosecond regime -- the shortest timescale ever created by humanity. HPL systems are instrumental in high-energy physics, leveraging ultra-short impulse durations to yield extremely high intensities, which are essential for both practical applications and theoretical advancements in light-matter interactions. Traditionally, the parameters regulating HPL optical performance have been manually tuned by human experts, or optimized using black-box methods that can be computationally demanding. Critically, black-box methods rely on stationarity assumptions, overlooking complex dynamics in high-energy physics and day-to-day changes in real-world experimental settings, and thus often need to be restarted. Deep Reinforcement Learning (DRL) offers a promising alternative by enabling sequential decision making in non-static settings. This work explores the feasibility of applying DRL to HPL systems, extending the current research by (1) learning a control policy relying solely on non-destructive image observations obtained from readily available diagnostic devices, and (2) retaining performance when the underlying dynamics vary. We evaluate our method across various test dynamics, and observe that DRL effectively enables cross-domain adaptability, coping with fluctuations in the dynamics while achieving 90% of the target intensity in test environments.
Minimal evolution times for fast, pulse-based state preparation in silicon spin qubits
Standing as one of the most significant barriers to reaching quantum advantage, state-preparation fidelities on noisy intermediate-scale quantum processors suffer from quantum-gate errors, which accumulate over time. A potential remedy is pulse-based state preparation. We numerically investigate the minimal evolution times (METs) attainable by optimizing (microwave and exchange) pulses on silicon hardware. We investigate two state preparation tasks. First, we consider the preparation of molecular ground states and find the METs for H_2, HeH^+, and LiH to be 2.4 ns, 4.4 ns, and 27.2 ns, respectively. Second, we consider transitions between arbitrary states and find the METs for transitions between arbitrary four-qubit states to be below 50 ns. For comparison, connecting arbitrary two-qubit states via one- and two-qubit gates on the same silicon processor requires approximately 200 ns. This comparison indicates that pulse-based state preparation is likely to utilize the coherence times of silicon hardware more efficiently than gate-based state preparation. Finally, we quantify the effect of silicon device parameters on the MET. We show that increasing the maximal exchange amplitude from 10 MHz to 1 GHz accelerates the METs, e.g., for H_2 from 84.3 ns to 2.4 ns. This demonstrates the importance of fast exchange. We also show that increasing the maximal amplitude of the microwave drive from 884 kHz to 56.6 MHz shortens state transitions, e.g., for two-qubit states from 1000 ns to 25 ns. Our results bound both the state-preparation times for general quantum algorithms and the execution times of variational quantum algorithms with silicon spin qubits.
Efficient Quantum Algorithms for Quantum Optimal Control
In this paper, we present efficient quantum algorithms that are exponentially faster than classical algorithms for solving the quantum optimal control problem. This problem involves finding the control variable that maximizes a physical quantity at time T, where the system is governed by a time-dependent Schrödinger equation. This type of control problem also has an intricate relation with machine learning. Our algorithms are based on a time-dependent Hamiltonian simulation method and a fast gradient-estimation algorithm. We also provide a comprehensive error analysis to quantify the total error from various steps, such as the finite-dimensional representation of the control function, the discretization of the Schrödinger equation, the numerical quadrature, and optimization. Our quantum algorithms require fault-tolerant quantum computers.
Autoregressive Transformer Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation
The theory of open quantum systems lays the foundations for a substantial part of modern research in quantum science and engineering. Rooted in the dimensionality of their extended Hilbert spaces, the high computational complexity of simulating open quantum systems calls for the development of strategies to approximate their dynamics. In this paper, we present an approach for tackling open quantum system dynamics. Using an exact probabilistic formulation of quantum physics based on positive operator-valued measure (POVM), we compactly represent quantum states with autoregressive transformer neural networks; such networks bring significant algorithmic flexibility due to efficient exact sampling and tractable density. We further introduce the concept of String States to partially restore the symmetry of the autoregressive transformer neural network and improve the description of local correlations. Efficient algorithms have been developed to simulate the dynamics of the Liouvillian superoperator using a forward-backward trapezoid method and find the steady state via a variational formulation. Our approach is benchmarked on prototypical one and two-dimensional systems, finding results which closely track the exact solution and achieve higher accuracy than alternative approaches based on using Markov chain Monte Carlo to sample restricted Boltzmann machines. Our work provides general methods for understanding quantum dynamics in various contexts, as well as techniques for solving high-dimensional probabilistic differential equations in classical setups.
Nonequilibrium Phenomena in Driven and Active Coulomb Field Theories
The classical Coulomb gas model has served as one of the most versatile frameworks in statistical physics, connecting a vast range of phenomena across many different areas. Nonequilibrium generalisations of this model have so far received much less attention. With the abundance of contemporary research into active and driven systems, one would naturally expect that such generalisations of systems with long-ranged Coulomb-like interactions will form a fertile playground for interesting developments. Here, we present two examples of novel macroscopic behaviour that arise from nonequilibrium fluctuations in long-range interacting systems, namely (1) unscreened long-ranged correlations in strong electrolytes driven by an external electric field and the associated fluctuation-induced forces in the confined Casimir geometry, and (2) out-of-equilibrium critical behaviour in self-chemotactic models that incorporate the particle polarity in the chemotactic response of the cells. Both of these systems have nonlocal Coulomb-like interactions among their constituent particles, namely, the electrostatic interactions in the case of the driven electrolyte, and the chemotactic forces mediated by fast-diffusing signals in the case of self-chemotactic systems. The results presented here hint at the rich phenomenology of nonequilibrium effects that can arise from strong fluctuations in Coulomb interacting systems, and a rich variety of potential future directions, which are discussed.
PAH Emission Spectra and Band Ratios for Arbitrary Radiation Fields with the Single Photon Approximation
We present a new method for generating emission spectra from polycyclic aromatic hydrocarbons (PAHs) in arbitrary radiation fields. We utilize the single-photon limit for PAH heating and emission to treat individual photon absorptions as independent events. This allows the construction of a set of single-photon emission "basis spectra" that can be scaled to produce an output emission spectrum given any input heating spectrum. We find that this method produces agreement with PAH emission spectra computed accounting for multi-photon effects to within ≃10% in the 3-20 μm wavelength range for radiation fields with intensity U < 100. We use this framework to explore the dependence of PAH band ratios on the radiation field spectrum across grain sizes, finding in particular a strong dependence of the 3.3 μm to 11.2 μm band ratio on radiation field hardness. A Python-based tool and a set of basis spectra that can be used to generate these emission spectra are made publicly available.
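In the single-photon limit the composition step is just a weighted sum: each basis spectrum is scaled by the rate at which photons of the corresponding energy are absorbed from the input field. The sketch below illustrates that bookkeeping with random stand-in arrays; the array names, shapes, and the absorption-rate expression are hypothetical and do not reflect the released tool's interface.

```python
# Minimal NumPy sketch of the single-photon approximation: emission spectrum as a
# weighted sum of single-photon basis spectra.
import numpy as np

n_E, n_lam = 50, 300
E_abs = np.linspace(1.0, 13.6, n_E)              # absorbed photon energies [eV]
basis = np.random.rand(n_E, n_lam)               # stand-in single-photon basis spectra
sigma_abs = np.random.rand(n_E)                  # stand-in absorption cross sections

def emission_spectrum(u_E):
    """u_E: input radiation field energy density sampled at E_abs (toy units)."""
    absorption_rate = sigma_abs * u_E / E_abs     # photons absorbed per unit time per bin
    return absorption_rate @ basis                # weighted sum of basis spectra

u_hard = np.exp(-(E_abs - 10.0) ** 2)             # toy "hard" radiation field
u_soft = np.exp(-(E_abs - 3.0) ** 2)              # toy "soft" radiation field
print(emission_spectrum(u_hard).shape, emission_spectrum(u_soft).shape)
```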
Ground State Preparation via Dynamical Cooling
Quantum algorithms for probing ground-state properties of quantum systems require good initial states. Projection-based methods such as eigenvalue filtering rely on inputs that have a significant overlap with the low-energy subspace, which can be challenging for large, strongly-correlated systems. This issue has motivated the study of physically-inspired dynamical approaches such as thermodynamic cooling. In this work, we introduce a ground-state preparation algorithm based on the simulation of quantum dynamics. Our main insight is to transform the Hamiltonian by a shifted sign function via quantum signal processing, effectively mapping eigenvalues into positive and negative subspaces separated by a large gap. This automatically ensures that all states within each subspace conserve energy with respect to the transformed Hamiltonian. Subsequent time-evolution with a perturbed Hamiltonian induces transitions to lower-energy states while preventing unwanted jumps to higher energy states. The approach does not rely on a priori knowledge of energy gaps and requires no additional qubits to model a bath. Furthermore, it makes O(d^{3/2}/ε) queries to the time-evolution operator of the system and O(d^{3/2}) queries to a block-encoding of the perturbation, for d cooling steps and an ε-accurate energy resolution. Our results provide a framework for combining quantum signal processing and Hamiltonian simulation to design heuristic quantum algorithms for ground-state preparation.
The SWAP test and the Hong-Ou-Mandel effect are equivalent
We show that the Hong-Ou-Mandel effect from quantum optics is equivalent to the SWAP test, a quantum information primitive which compares two arbitrary states. We first derive a destructive SWAP test that does not need the ancillary qubit that appears in the usual quantum circuit. Then, we study the Hong-Ou-Mandel effect for two photons meeting at a beam splitter and prove it is, in fact, an optical implementation of the destructive SWAP test. This result offers both a simple realization of a powerful quantum information primitive and an alternative way to understand and analyse the Hong-Ou-Mandel effect.
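A way to see the equivalence numerically: the destructive SWAP test is a Bell-basis measurement (CNOT followed by a Hadamard on the control), and the probability of the antisymmetric outcome 11 equals (1 − |⟨ψ|φ⟩|²)/2, mirroring the HOM coincidence dip. The short check below is a generic statevector calculation, not the paper's circuit.

```python
# Minimal NumPy check of the destructive SWAP test on two single-qubit states.
import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
BELL = np.kron(H1, np.eye(2)) @ CNOT                 # Bell-basis change of basis

def random_qubit(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
psi, phi = random_qubit(rng), random_qubit(rng)
out = BELL @ np.kron(psi, phi)
p11 = abs(out[3]) ** 2                               # probability of outcome |11> (singlet)
overlap2 = abs(np.vdot(psi, phi)) ** 2
print("P(11)                  =", p11)
print("(1 - |<psi|phi>|^2)/2  =", (1 - overlap2) / 2)   # the two numbers agree
```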
All photonic quantum repeaters
Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories at the repeater nodes. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all photonic quantum repeaters based on flying qubits. As an example of the realization of this concept, we present a protocol based on photonic cluster state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such an all photonic quantum repeater, the communication efficiency still scales polynomially with the channel distance. Our result paves a new route toward quantum repeaters with efficient single-photon sources rather than matter quantum memories.
Intensity statistics inside an open wave-chaotic cavity with broken time-reversal invariance
Using the supersymmetric method of random matrix theory within the Heidelberg approach framework, we provide a statistical description of the stationary intensity sampled at locations inside an open wave-chaotic cavity, assuming that time-reversal invariance inside the cavity is fully broken. In particular, we show that when incoming waves are fed via a finite number M of open channels, the probability density P(I) for the single-point intensity I decays as a power law for large intensities: P(I) ∼ I^{-(M+2)}, provided there are no internal losses. This behaviour is in marked contrast to the Rayleigh law P(I) ∼ exp(-I/⟨I⟩), with ⟨I⟩ the mean intensity, which turns out to be valid only in the limit M → ∞. We also find the joint probability density of intensities I_1, …, I_L at L > 1 observation points, and then extract the corresponding statistics for the maximal intensity in the observation pattern. For L → ∞, the resulting limiting extreme value statistics (EVS) turns out to be different from the classical EVS distributions.
A unified diagrammatic approach to quantum transport in few-level junctions for bosonic and fermionic reservoirs: Application to the quantum Rabi model
We apply the Nakajima-Zwanzig approach to open quantum systems to study steady-state transport across generic multi-level junctions coupled to bosonic or fermionic reservoirs. The method allows for a unified diagrammatic formulation in Liouville space, with diagrams being classified according to an expansion in the coupling strength between the reservoirs and the junction. Analytical, approximate expressions are provided up to fourth order for the steady-state boson transport that generalize to multi-level systems the known results for the low-temperature thermal conductance in the spin-boson model. The formalism is applied to the problem of heat transport in a qubit-resonator junction modeled by the quantum Rabi model. Nontrivial transport features emerge as a result of the interplay between the qubit-oscillator detuning and coupling strength. For quasi-degenerate spectra, nonvanishing steady-state coherences cause a suppression of the thermal conductance.
Rearrangement of single atoms in a 2000-site optical tweezers array at cryogenic temperatures
We report on the trapping of single rubidium atoms in large arrays of optical tweezers comprising up to 2088 sites in a cryogenic environment at 6 K. Our approach relies on the use of microscope objectives that are in-vacuum but at room temperature, in combination with windowless thermal shields into which the objectives are protruding to ensure a cryogenic environment for the trapped atoms. To achieve enough optical power for efficient trapping, we combine two lasers at slightly different wavelengths. We discuss the performance and limitations of our design. Finally, we demonstrate atom-by-atom rearrangement of an 828-atom target array using moving optical tweezers controlled by a field-programmable gate array.
Turbulence modulation in liquid-liquid two-phase Taylor-Couette turbulence
We investigate the coupling effects of the two-phase interface, viscosity ratio, and density ratio of the dispersed phase to the continuous phase on the flow statistics in two-phase Taylor-Couette turbulence at a system Reynolds number of 6000 and a system Weber number of 10 using interface-resolved three-dimensional direct numerical simulations with the volume-of-fluid method. Our study focuses on four different scenarios: neutral droplets, low-viscosity droplets, light droplets, and low-viscosity light droplets. We find that neutral droplets and low-viscosity droplets primarily contribute to drag enhancement through the two-phase interface, while light droplets reduce the system's drag by explicitly reducing Reynolds stress due to the density dependence of Reynolds stress. Additionally, low-viscosity light droplets contribute to greater drag reduction by further reducing momentum transport near the inner cylinder and implicitly reducing Reynolds stress. While interfacial tension enhances turbulent kinetic energy (TKE) transport, drag enhancement is not strongly correlated with TKE transport for both neutral droplets and low-viscosity droplets. Light droplets primarily reduce the production term by diminishing Reynolds stress, whereas the density contrast between the phases boosts TKE transport near the inner wall. Therefore, the reduction in the dissipation rate is predominantly attributed to decreased turbulence production, causing drag reduction. For low-viscosity light droplets, the production term diminishes further, primarily due to their greater reduction in Reynolds stress, while reduced viscosity weakens the density difference's contribution to TKE transport near the inner cylinder, resulting in a more pronounced reduction in the dissipation rate and consequently stronger drag reduction. Our findings provide new insights into the turbulence modulation in two-phase flow.
Characterisation of three-body loss in ^{166}Er and optimised production of large Bose-Einstein condensates
Ultracold gases of highly magnetic lanthanide atoms have enabled the realisation of dipolar quantum droplets and supersolids. However, future studies could be limited by the achievable atom numbers and hindered by high three-body loss rates. Here we study density-dependent atom loss in an ultracold gas of ^{166}Er for magnetic fields below 4 G, identifying six previously unreported, strongly temperature-dependent features. We find that their positions and widths show a linear temperature dependence up to at least 15 μK. In addition, we observe a weak, polarisation-dependent shift of the loss features with the intensity of the light used to optically trap the atoms. This detailed knowledge of the loss landscape allows us to optimise the production of dipolar BECs with more than 2 × 10^5 atoms and points towards optimal strategies for the study of large-atom-number dipolar gases in the droplet and supersolid regimes.
Comparing coherent and incoherent models for quantum homogenization
Here we investigate the role of quantum interference in the quantum homogenizer, whose convergence properties model a thermalization process. In the original quantum homogenizer protocol, a system qubit converges to the state of identical reservoir qubits through partial-swap interactions that allow interference between reservoir qubits. We design an alternative, incoherent quantum homogenizer, where each system-reservoir interaction is moderated by a control qubit using a controlled-swap interaction. We show that our incoherent homogenizer satisfies the essential conditions for homogenization, being able to transform a qubit from any state to any other state to arbitrary accuracy, with negligible impact on the reservoir qubits' states. Our results show that the convergence properties of homogenization machines that are important for modelling thermalization are not dependent on coherence between qubits in the homogenization protocol. We then derive bounds on the resources required to re-use the homogenizers for performing state transformations. This demonstrates that both homogenizers are universal for any number of homogenizations, at an increased resource cost.
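The original (coherent) protocol referenced above is easy to simulate as a collision model: the system qubit meets fresh reservoir qubits one by one through the partial swap U = cos(η) I + i sin(η) SWAP and drifts toward the reservoir state. The interaction strength η and the states below are illustrative choices, and the incoherent controlled-swap variant introduced in the paper is not reproduced here.

```python
# Minimal NumPy collision-model sketch of the coherent (partial-swap) homogenizer.
import numpy as np

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
eta = 0.3
U = np.cos(eta) * np.eye(4) + 1j * np.sin(eta) * SWAP     # partial-swap unitary

rho_sys = np.array([[1.0, 0.0], [0.0, 0.0]])          # system starts in |0><0|
xi = np.array([[0.25, 0.0], [0.0, 0.75]])             # reservoir qubit state

def ptrace_reservoir(rho4):
    """Trace out the second (reservoir) qubit of a two-qubit density matrix."""
    return rho4.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

for n in range(1, 51):
    joint = np.kron(rho_sys, xi)                      # fresh reservoir qubit each collision
    rho_sys = ptrace_reservoir(U @ joint @ U.conj().T)
    if n % 10 == 0:
        dist = 0.5 * np.abs(np.linalg.eigvalsh(rho_sys - xi)).sum()   # trace distance
        print(f"after {n:2d} collisions, trace distance to reservoir state: {dist:.4f}")
```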
Quantum control of a cat-qubit with bit-flip times exceeding ten seconds
Binary classical information is routinely encoded in the two metastable states of a dynamical system. Since these states may exhibit macroscopic lifetimes, the encoded information inherits a strong protection against bit-flips. A recent qubit - the cat-qubit - is encoded in the manifold of metastable states of a quantum dynamical system, thereby acquiring bit-flip protection. An outstanding challenge is to gain quantum control over such a system without breaking its protection. If this challenge is met, significant shortcuts in hardware overhead are forecast for quantum computing. In this experiment, we implement a cat-qubit with bit-flip times exceeding ten seconds. This is a four-order-of-magnitude improvement over previous cat-qubit implementations, and a six-order-of-magnitude enhancement over the lifetime of the single photons that compose this dynamical qubit. This was achieved by introducing a quantum tomography protocol that does not break bit-flip protection. We prepare and image quantum superposition states, and measure phase-flip times above 490 nanoseconds. Most importantly, we control the phase of these superpositions while maintaining the bit-flip time above ten seconds. This work demonstrates quantum operations that preserve macroscopic bit-flip times, a necessary step to scale these dynamical qubits into fully protected hardware-efficient architectures.
Metrological detection of multipartite entanglement through dynamical symmetries
Multipartite entanglement, characterized by the quantum Fisher information (QFI), plays a central role in quantum-enhanced metrology and understanding quantum many-body physics. With a dynamical generalization of the Mazur-Suzuki relations, we provide a rigorous lower bound on the QFI for the thermal Gibbs states in terms of dynamical symmetries, i.e., operators with periodic time dependence. We demonstrate that this bound can be saturated when considering a complete set of dynamical symmetries. Moreover, this lower bound with dynamical symmetries can be generalized to the QFI matrix and to the QFI for the thermal pure states, predicted by the eigenstate thermalization hypothesis. Our results reveal a new perspective to detect multipartite entanglement and other generalized variances in an equilibrium system, from its nonstationary dynamical properties, and is promising for studying emergent nonequilibrium many-body physics.
Deep learning probability flows and entropy production rates in active matter
Active matter systems, from self-propelled colloids to motile bacteria, are characterized by the conversion of free energy into useful work at the microscopic scale. These systems generically involve physics beyond the reach of equilibrium statistical mechanics, and a persistent challenge has been to understand the nature of their nonequilibrium states. The entropy production rate and the magnitude of the steady-state probability current provide quantitative ways to do so by measuring the breakdown of time-reversal symmetry and the strength of nonequilibrium transport of measure. Yet, their efficient computation has remained elusive, as they depend on the system's unknown and high-dimensional probability density. Here, building upon recent advances in generative modeling, we develop a deep learning framework that estimates the score of this density. We show that the score, together with the microscopic equations of motion, gives direct access to the entropy production rate, the probability current, and their decomposition into local contributions from individual particles, spatial regions, and degrees of freedom. To represent the score, we introduce a novel, spatially-local transformer-based network architecture that learns high-order interactions between particles while respecting their underlying permutation symmetry. We demonstrate the broad utility and scalability of the method by applying it to several high-dimensional systems of interacting active particles undergoing motility-induced phase separation (MIPS). We show that a single instance of our network trained on a system of 4096 particles at one packing fraction can generalize to other regions of the phase diagram, including systems with as many as 32768 particles. We use this observation to quantify the spatial structure of the departure from equilibrium in MIPS as a function of the number of particles and the packing fraction.
Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach
We introduce AQCtensor, a novel algorithm to produce short-depth quantum circuits from Matrix Product States (MPS). Our approach is specifically tailored to the preparation of quantum states generated from the time evolution of quantum many-body Hamiltonians. This tailored approach has two clear advantages over previous algorithms that were designed to map a generic MPS to a quantum circuit. First, we optimize all parameters of a parametric circuit at once using Approximate Quantum Compiling (AQC) - this is to be contrasted with other approaches based on locally optimizing a subset of circuit parameters and "sweeping" across the system. We introduce an optimization scheme to avoid the so-called "orthogonality catastrophe" - i.e. the fact that the fidelity of two arbitrary quantum states decays exponentially with the number of qubits - that would otherwise render a global optimization of the circuit impractical. Second, the depth of our parametric circuit is constant in the number of qubits for a fixed simulation time and fixed error tolerance. This is to be contrasted with the linear circuit Ansatz used in generic algorithms whose depth scales linearly in the number of qubits. For simulation problems on 100 qubits, we show that AQCtensor thus achieves at least an order of magnitude reduction in the depth of the resulting optimized circuit, as compared with the best generic MPS-to-quantum-circuit algorithms. We demonstrate our approach on simulation problems on Heisenberg-like Hamiltonians on up to 100 qubits and find optimized quantum circuits that have significantly reduced depth as compared to standard Trotterized circuits.
The Virtual Quantum Optics Laboratory
We present a web-based software tool, the Virtual Quantum Optics Laboratory (VQOL), that may be used for designing and executing realistic simulations of quantum optics experiments. A graphical user interface allows one to rapidly build and configure a variety of different optical experiments, while the runtime environment provides unique capabilities for visualization and analysis. All standard linear optical components are available as well as sources of thermal, coherent, and entangled Gaussian states. A unique aspect of VQOL is the introduction of non-Gaussian measurements using detectors modeled as deterministic devices that "click" when the amplitude of the light falls above a given threshold. We describe the underlying theoretical models and provide several illustrative examples. We find that VQOL provides a faithful representation of many experimental quantum optics phenomena and may serve both as a useful instructional tool for students and as a valuable research tool for practitioners.
Optimizing quantum noise-induced reservoir computing for nonlinear and chaotic time series prediction
Quantum reservoir computing is strongly emerging for sequential and time series data prediction in quantum machine learning. We make advancements to the quantum noise-induced reservoir, in which reservoir noise is used as a resource to generate expressive, nonlinear signals that are efficiently learned with a single linear output layer. We address the need for quantum reservoir tuning with a novel and generally applicable approach to quantum circuit parameterization, in which tunable noise models are programmed to the quantum reservoir circuit to be fully controlled for effective optimization. Our systematic approach also involves reductions in quantum reservoir circuits in the number of qubits and entanglement scheme complexity. We show that with only a single noise model and small memory capacities, excellent simulation results were obtained on nonlinear benchmarks that include the Mackey-Glass system for 100 steps ahead in the challenging chaotic regime.
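The training step emphasized above, a single linear output layer fitted on reservoir signals, can be illustrated with a purely classical stand-in: random nonlinear features of a sliding input window play the role of the reservoir, and ridge regression fits the readout on a Mackey-Glass series. This sketch shows only the readout scheme, not the quantum noise-induced reservoir itself; the window size, feature count, and ridge strength are arbitrary choices.

```python
# Classical stand-in for the linear-readout training used in reservoir computing.
import numpy as np

# Discretised Mackey-Glass series (standard parameters, Euler step dt = 1).
tau, n_steps = 17, 3000
x = np.full(n_steps, 1.2)
for t in range(tau, n_steps - 1):
    x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]

window, n_features, horizon = 10, 64, 1
rng = np.random.default_rng(1)
W_in = rng.normal(size=(n_features, window))          # fixed random "reservoir" map

X = np.array([np.tanh(W_in @ x[t - window + 1: t + 1])
              for t in range(window - 1, n_steps - horizon)])
y = np.array([x[t + horizon] for t in range(window - 1, n_steps - horizon)])
split = len(X) * 3 // 4

ridge = 1e-6
A = X[:split]
w = np.linalg.solve(A.T @ A + ridge * np.eye(n_features), A.T @ y[:split])  # linear readout
pred = X[split:] @ w
print("test NRMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:]))
```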
Energy-Consumption Advantage of Quantum Computation
Energy consumption in solving computational problems has been gaining growing attention as a part of the performance measures of computers. Quantum computation is known to offer advantages over classical computation in terms of various computational resources; however, its advantage in energy consumption has been challenging to analyze due to the lack of a theoretical foundation to relate the physical notion of energy and the computer-scientific notion of complexity for quantum computation with finite computational resources. To bridge this gap, we introduce a general framework for studying the energy consumption of quantum and classical computation based on a computational model that has been conventionally used for studying query complexity in computational complexity theory. With this framework, we derive an upper bound for the achievable energy consumption of quantum computation. We also develop techniques for proving a nonzero lower bound of energy consumption of classical computation based on the energy-conservation law and Landauer's principle. With these general bounds, we rigorously prove that quantum computation achieves an exponential energy-consumption advantage over classical computation for solving a specific computational problem, Simon's problem. Furthermore, we clarify how to demonstrate this energy-consumption advantage of quantum computation in an experimental setting. These results provide a fundamental framework and techniques to explore the physical meaning of quantum advantage in the query-complexity setting based on energy consumption, opening an alternative way to study the advantages of quantum computation.
Is quantum computing green? An estimate for an energy-efficiency quantum advantage
The quantum advantage threshold determines when a quantum processing unit (QPU) is more efficient than classical computing hardware in terms of algorithmic complexity. The "green" quantum advantage threshold - based on a comparison of energetic efficiency between the two - is going to play a fundamental role in the comparison between quantum and classical hardware. Indeed, its characterization would enable better decisions on energy-saving strategies, e.g. for distributing the workload in hybrid quantum-classical algorithms. Here, we show that the green quantum advantage threshold crucially depends on (i) the quality of the experimental quantum gates and (ii) the entanglement generated in the QPU. Indeed, for NISQ hardware and algorithms requiring a moderate amount of entanglement, a classical tensor network emulation can be more energy-efficient at equal final state fidelity than quantum computation. We compute the green quantum advantage threshold for a few paradigmatic examples in terms of algorithms and hardware platforms, and identify algorithms with a power-law decay of singular values of bipartitions - with power-law exponent \alpha \lesssim 1 - as those most likely to reach the green quantum advantage threshold in the near future.
Generic Two-Mode Gaussian States as Quantum Sensors
Gaussian quantum channels constitute a cornerstone of continuous-variable quantum information science, underpinning a wide array of protocols in quantum optics and quantum metrology. While the action of such channels on arbitrary states is well-characterized under full channel knowledge, we address the inverse problem, namely, the precise estimation of fundamental channel parameters, including the beam splitter transmissivity and the two-mode squeezing amplitude. Employing the quantum Fisher information (QFI) as a benchmark for metrological sensitivity, we demonstrate that the symmetry inherent in mode mixing critically governs the amplification of QFI, thereby enabling high-precision parameter estimation. In addition, we investigate quantum thermometry by estimating the average photon number of thermal states, revealing that the transmissivity parameter significantly modulates estimation precision. Our results underscore the metrological utility of two-mode Gaussian states and establish a robust framework for parameter inference in noisy and dynamically evolving quantum systems.
Large-scale optical characterization of solid-state quantum emitters
Solid-state quantum emitters have emerged as a leading quantum memory for quantum networking applications. However, standard optical characterization techniques are neither efficient nor repeatable at scale. In this work, we introduce and demonstrate spectroscopic techniques that enable large-scale, automated characterization of color centers. We first demonstrate the ability to track color centers by registering them to a fabricated machine-readable global coordinate system, enabling systematic comparison of the same color center sites over many experiments. We then implement resonant photoluminescence excitation in a widefield cryogenic microscope to parallelize resonant spectroscopy, achieving two orders of magnitude speed-up over confocal microscopy. Finally, we demonstrate automated chip-scale characterization of color centers and devices at room temperature, imaging thousands of microscope fields of view. These tools will enable accelerated identification of useful quantum emitters at chip-scale, enabling advances in scaling up color center platforms for quantum information applications, materials science, and device design and characterization.
Assembly and coherent control of a register of nuclear spin qubits
We introduce an optical tweezer platform for assembling and individually manipulating a two-dimensional register of nuclear spin qubits. Each nuclear spin qubit is encoded in the ground ^{1}S_{0} manifold of ^{87}Sr and is individually manipulated by site-selective addressing beams. We observe that spin relaxation is negligible after 5 seconds, indicating that T_1 \gg 5 s. Furthermore, utilizing simultaneous manipulation of subsets of qubits, we demonstrate significant phase coherence over the entire register, estimating T_2^\star = (21 \pm 7) s and measuring T_2^{\rm echo} = (42 \pm 6) s.
A comparison between higher-order nonclassicalities of superposition engineered coherent and thermal states
We consider an experimentally obtainable SUP operator, defined using a generalized superposition of products of field annihilation (a) and creation (a^\dagger) operators of the type A = s a a^\dagger + t a^\dagger a with s^2 + t^2 = 1. We apply this SUP operator to coherent and thermal quantum states; the states thus produced are referred to as the SUP-operated coherent state (SOCS) and the SUP-operated thermal state (SOTS), respectively. In the present work, we report a comparative study of the higher-order nonclassical properties of SOCS and SOTS. The comparison is performed using a set of nonclassicality witnesses (e.g., higher-order antibunching, higher-order sub-Poissonian photon statistics, higher-order squeezing, the Agarwal-Tara parameter, and Klyshko's condition). The existence of higher-order nonclassicalities in SOCS and SOTS has been investigated for the first time. In view of possible experimental verification of the proposed scheme, we present exact calculations to reveal the effect of the non-unit quantum efficiency of the quantum detector on higher-order nonclassicalities.
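To make the SUP operator concrete, here is a minimal numerical sketch in Python using QuTiP. The truncation N, amplitude alpha, and coefficients s, t are illustrative placeholder values, not the paper's parameters, and the Mandel Q check shown is only the lowest-order nonclassicality indicator:

import numpy as np
from qutip import destroy, coherent, expect

N, alpha = 30, 2.0                       # Fock-space truncation and coherent amplitude (assumed values)
s = 0.6
t = np.sqrt(1 - s**2)                    # enforce s^2 + t^2 = 1

a = destroy(N)
A = s * a * a.dag() + t * a.dag() * a    # SUP operator
psi = coherent(N, alpha)
socs = (A * psi).unit()                  # SUP-operated coherent state (normalized)

n = a.dag() * a                          # photon-number operator
Q = (expect(n * n, socs) - expect(n, socs) ** 2) / expect(n, socs) - 1
print(Q)                                 # Q < 0 signals sub-Poissonian (nonclassical) statistics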
EvidenceMoE: A Physics-Guided Mixture-of-Experts with Evidential Critics for Advancing Fluorescence Light Detection and Ranging in Scattering Media
Fluorescence LiDAR (FLiDAR), a Light Detection and Ranging (LiDAR) technology employed for distance and depth estimation across medical, automotive, and other fields, encounters significant computational challenges in scattering media. The complex nature of the acquired FLiDAR signal, particularly in such environments, makes isolating photon time-of-flight (related to target depth) and intrinsic fluorescence lifetime exceptionally difficult, thus limiting the effectiveness of current analytical and computational methodologies. To overcome this limitation, we present a Physics-Guided Mixture-of-Experts (MoE) framework tailored for specialized modeling of diverse temporal components. In contrast to conventional MoE approaches, our expert models are informed by underlying physics, such as the radiative transport equation governing photon propagation in scattering media. Central to our approach is EvidenceMoE, which integrates Evidence-Based Dirichlet Critics (EDCs). These critic models assess the reliability of each expert's output by providing per-expert quality scores and corrective feedback. A Decider Network then leverages this information to adaptively fuse expert predictions into a robust final estimate. We validate our method using realistically simulated FLiDAR data for non-invasive cancer cell depth detection generated from photon transport models in tissue. Our framework demonstrates strong performance, achieving a normalized root mean squared error (NRMSE) of 0.030 for depth estimation and 0.074 for fluorescence lifetime.
Fusion-based quantum computation
We introduce fusion-based quantum computing (FBQC) - a model of universal quantum computation in which entangling measurements, called fusions, are performed on the qubits of small constant-sized entangled resource states. We introduce a stabilizer formalism for analyzing fault tolerance and computation in these schemes. This framework naturally captures the error structure that arises in certain physical systems for quantum computing, such as photonics. FBQC can offer significant architectural simplifications, enabling hardware made up of many identical modules, requiring an extremely low depth of operations on each physical qubit and reducing classical processing requirements. We present two pedagogical examples of fault-tolerant schemes constructed in this framework and numerically evaluate their threshold under a hardware-agnostic fusion error model including both erasure and Pauli error. We also study an error model of linear optical quantum computing with probabilistic fusion and photon loss. In FBQC, the non-determinism of fusion is directly dealt with by the quantum error correction protocol, along with other errors. We find that tailoring the fault-tolerance framework to the physical system allows the scheme to have a higher threshold than schemes reported in the literature. We present a ballistic scheme which can tolerate a 10.4% probability of suffering photon loss in each fusion.
Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer
Photonics is the platform of choice to build a modular, easy-to-network quantum computer operating at room temperature. However, no concrete architecture has been presented so far that exploits both the advantages of qubits encoded into states of light and the modern tools for their generation. Here we propose such a design for a scalable and fault-tolerant photonic quantum computer informed by the latest developments in theory and technology. Central to our architecture is the generation and manipulation of three-dimensional hybrid resource states comprising both bosonic qubits and squeezed vacuum states. The proposal enables exploiting state-of-the-art procedures for the non-deterministic generation of bosonic qubits combined with the strengths of continuous-variable quantum computation, namely the implementation of Clifford gates using easy-to-generate squeezed states. Moreover, the architecture is based on two-dimensional integrated photonic chips used to produce a qubit cluster state in one temporal and two spatial dimensions. By reducing the experimental challenges as compared to existing architectures and by enabling room-temperature quantum computation, our design opens the door to scalable fabrication and operation, which may allow photonics to leap-frog other platforms on the path to a quantum computer with millions of qubits.
Ergotropy and Capacity Optimization in Heisenberg Spin Chain Quantum Batteries
This study examines the performance of finite spin quantum batteries (QBs) using Heisenberg spin models with Dzyaloshinsky-Moriya (DM) and Kaplan--Shekhtman--Entin-Wohlman--Aharony (KSEA) interactions. The QBs are modeled as interacting quantum spins in local inhomogeneous magnetic fields, inducing variable Zeeman splitting. We derive analytical expressions for the maximal extractable work (ergotropy) and the capacity of QBs, as recently examined by Yang et al. [Phys. Rev. Lett. 131, 030402 (2023)]. These quantities are analytically linked through certain quantum correlations, as posited in the aforementioned study. Different Heisenberg spin chain models exhibit distinct behaviors under varying conditions, emphasizing the importance of model selection for optimizing QB performance. In antiferromagnetic (AFM) systems, maximum ergotropy occurs with a Zeeman splitting field applied to either spin, while ferromagnetic (FM) systems benefit from a uniform Zeeman field. Temperature significantly impacts QB performance, with ergotropy in the AFM case being generally more robust against temperature increases compared to the FM case. Incorporating DM and KSEA couplings can significantly enhance the capacity and ergotropy extraction of QBs. However, there exists a threshold beyond which additional increases in these interactions cause a sharp decline in capacity and ergotropy. This behavior is influenced by temperature and quantum coherence, which signal the occurrence of a sudden phase transition. The resource theory of quantum coherence proposed by Baumgratz et al. [Phys. Rev. Lett. 113, 140401 (2014)] plays a crucial role in enhancing ergotropy and capacity. However, ergotropy is limited by both the system's capacity and the amount of coherence. These findings support the theoretical framework of spin-based QBs and may benefit future research on quantum energy storage devices.
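For reference, the ergotropy of a state \rho with internal Hamiltonian H is the maximal work extractable by unitary operations; this is the standard definition, assumed here to match the paper's conventions:
W(\rho, H) = \mathrm{tr}(\rho H) - \min_{U} \mathrm{tr}(U \rho U^{\dagger} H) = \mathrm{tr}(\rho H) - \mathrm{tr}(\pi_{\rho} H),
where \pi_{\rho} is the passive state obtained by pairing the eigenvalues of \rho in decreasing order with the energy levels of H in increasing order.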
Calculation of prompt diphoton production cross sections at Tevatron and LHC energies
A fully differential calculation in perturbative quantum chromodynamics is presented for the production of massive photon pairs at hadron colliders. All next-to-leading order perturbative contributions from quark-antiquark, gluon-(anti)quark, and gluon-gluon subprocesses are included, as well as all-orders resummation of initial-state gluon radiation valid at next-to-next-to-leading logarithmic accuracy. The region of phase space is specified in which the calculation is most reliable. Good agreement is demonstrated with data from the Fermilab Tevatron, and predictions are made for more detailed tests with CDF and D0 data. Predictions are shown for distributions of diphoton pairs produced at the energy of the Large Hadron Collider (LHC). Distributions of the diphoton pairs from the decay of a Higgs boson are contrasted with those produced from QCD processes at the LHC, showing that enhanced sensitivity to the signal can be obtained with judicious selection of events.
Entanglement Purification in Quantum Networks: Guaranteed Improvement and Optimal Time
While the concept of entanglement purification protocols (EPPs) is straightforward, the integration of EPPs in network architectures requires careful performance evaluations and optimizations that take into account realistic conditions and imperfections, especially probabilistic entanglement generation and quantum memory decoherence. It is important to understand what is guaranteed to be improved from successful EPP with arbitrary non-identical input, which determines whether we want to perform the EPP at all. When successful EPP can offer improvement, the time to perform the EPP should also be optimized to maximize the improvement. In this work, we study the guaranteed improvement and optimal time for the CNOT-based recurrence EPP, previously shown to be optimal in various scenarios. We firstly prove guaranteed improvement for multiple figures of merit, including fidelity and several entanglement measures when compared to practical baselines as functions of input states. However, it is noteworthy that the guaranteed improvement we prove does not imply the universality of the EPP as introduced in arXiv:2407.21760. Then we prove robust, parameter-independent optimal time for typical error models and figures of merit. We further explore memory decoherence described by continuous-time Pauli channels, and demonstrate the phenomenon of optimal time transition when the memory decoherence error pattern changes. Our work deepens the understanding of EPP performance in realistic scenarios and offers insights into optimizing quantum networks that integrate EPPs.
Label-efficient Single Photon Images Classification via Active Learning
Single-photon LiDAR achieves high-precision 3D imaging in extreme environments through quantum-level photon detection technology. Current research primarily focuses on reconstructing 3D scenes from sparse photon events, whereas the semantic interpretation of single-photon images remains underexplored, due to high annotation costs and inefficient labeling strategies. This paper presents the first active learning framework for single-photon image classification. The core contribution is an imaging condition-aware sampling strategy that integrates synthetic augmentation to model variability across imaging conditions. By identifying samples where the model is both uncertain and sensitive to these conditions, the proposed method selectively annotates only the most informative examples. Experiments on both synthetic and real-world datasets show that our approach outperforms all baselines and achieves high classification accuracy with significantly fewer labeled samples. Specifically, our approach achieves 97% accuracy on synthetic single-photon data using only 1.5% labeled samples. On real-world data, we maintain 90.63% accuracy with just 8% labeled samples, which is 4.51% higher than the best-performing baseline. This illustrates that active learning enables the same level of classification performance on single-photon images as on classical images, opening doors to large-scale integration of single-photon data in real-world applications.
A Compact Dual-Beam Zeeman Slower for High-Flux Cold Atoms
We present a compact dual-beam Zeeman slower design optimized for the efficient production of cold atoms. Traditional single-beam configurations face challenges from substantial residual atomic flux impacting downstream optical windows, resulting in increased system size, atomic deposition contamination, and a reduced operational lifetime. Our approach employs two oblique laser beams and a capillary-array collimation system to address these challenges while maintaining efficient deceleration. For rubidium (^{87}Rb), simulations demonstrate a significant increase in the fraction of atoms captured by a two-dimensional magneto-optical trap (2D-MOT) and nearly eliminate atom-induced contamination probability at optical windows, all within a compact Zeeman slower length of 44 cm. Experimental validation with Rb and Yb demonstrates highly efficient atomic loading within the same compact design. This advancement represents a substantial improvement for high-flux cold atom applications, providing reliable performance for high-precision metrology, quantum computation and simulation.
Quantum coherence and distribution of N-partite bosonic fields in noninertial frame
We study the quantum coherence, and its distribution, of N-partite GHZ and W states of bosonic fields in noninertial frames with an arbitrary number of accelerated observers. We find that the coherence of both the GHZ and W states decreases with acceleration and freezes in the limit of infinite acceleration. The freezing value of the coherence depends on the number of accelerated observers. The coherence of the N-partite GHZ state is genuinely global and no coherence exists in any subsystem. For the N-partite W state, however, the coherence is essentially of bipartite type, and the total coherence equals the sum of the coherences of all bipartite subsystems.
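The abstract does not specify which coherence quantifier is used; one standard choice consistent with such additivity statements (an assumption here, not a statement of the paper's definition) is the l_1-norm of coherence,
C_{\ell_1}(\rho) = \sum_{i \neq j} |\rho_{ij}|,
i.e., the sum of the magnitudes of the off-diagonal elements of the density matrix in the chosen reference basis.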
Efficient parametric frequency conversions in lithium niobate nanophotonic chips
Chip-integrated nonlinear photonics holds the key for advanced optical information processing with superior performance and novel functionalities. Here, we present an optimally mode-matched, periodically poled lithium niobate nanowaveguide for efficient parametric frequency conversions on chip. Using a 4-mm nanowaveguide with subwavelength mode confinement, we demonstrate second harmonic generation with efficiency over 2200% W^{-1} cm^{-2}, and broadband difference frequency generation with similar efficiency over a 4.5-THz spectral span. These allow us to generate correlated photon pairs over multiple frequency channels via spontaneous parametric down conversion, all in their fundamental spatial modes, with a coincidence to accidental ratio as high as 600. The high efficiency and dense integrability of the present chip devices may pave a viable route to scalable nonlinear applications in both classical and quantum domains.
Information Theory and Statistical Mechanics Revisited
The statistical mechanics of Gibbs is a juxtaposition of subjective, probabilistic ideas on the one hand and objective, mechanical ideas on the other. In this paper, we follow the path set out by Jaynes, including elements added subsequently to that original work, to explore the consequences of the purely statistical point of view. We show how standard methods in the equilibrium theory could have been derived simply from a description of the available problem information. In addition, our presentation leads to novel insights into questions associated with symmetry and non-equilibrium statistical mechanics. Two surprising consequences to be explored in further work are that (in)distinguishability factors are automatically predicted from the problem formulation and that a quantity related to the thermodynamic entropy production is found by considering information loss in non-equilibrium processes. Using the problem of ion channel thermodynamics as an example, we illustrate the idea of building up complexity by successively adding information to create progressively more complex descriptions of a physical system. Our result is that such statistical mechanical descriptions can be used to create transparent, computable, experimentally-relevant models that may be informed by more detailed atomistic simulations. We also derive a theory for the kinetic behavior of this system, identifying the nonequilibrium `process' free energy functional. The Gibbs relation for this functional is a fluctuation-dissipation theorem applicable arbitrarily far from equilibrium, that captures the effect of non-local and time-dependent behavior from transient driving forces. Based on this work, it is clear that statistical mechanics is a general tool for constructing the relationships between constraints on system information.
Quantum circuit synthesis with diffusion models
Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text-conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow one to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a consistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation to the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and insights into theoretical quantum computation.
Efficient Self-Consistent Quantum Comb Tomography on the Product Stiefel Manifold
Characterizing non-Markovian quantum dynamics is currently hindered by the self-inconsistency and high computational complexity of existing quantum comb tomography (QCT) methods. In this work, we propose a self-consistent framework that unifies the quantum comb, instrument set, and initial states into a single geometric entity, termed the Comb-Instrument-State (CIS) set. We demonstrate that the CIS set naturally resides on a product Stiefel manifold, allowing the tomography problem to be solved via efficient unconstrained Riemannian optimization while automatically preserving physical constraints. Numerical simulations confirm that our approach is computationally scalable and robust against gate definition errors, significantly outperforming conventional isometry-based QCT methods. Our work indicates the potential to efficiently learn quantum combs with fewer computational resources.
Three-level Dicke quantum battery
A quantum battery (QB) is an energy storage and extraction device governed by the principles of quantum mechanics. Here we propose a three-level Dicke QB and investigate its charging process by considering three quantum optical states: a Fock state, a coherent state, and a squeezed state. The performance of the QB charged with a coherent state is substantially improved compared to the Fock and squeezed states. We find that the locked energy is positively related to the entanglement between the charger and the battery, and diminishing the entanglement enhances the ergotropy. We demonstrate that the QB system is asymptotically free as N \rightarrow \infty. The stored energy becomes fully extractable when N=10, and the charging power follows the same behavior as the stored energy, independent of the initial state of the charger.
Quantum Generative Diffusion Model
This paper introduces the Quantum Generative Diffusion Model (QGDM), a fully quantum-mechanical model for generating quantum state ensembles, inspired by Denoising Diffusion Probabilistic Models. QGDM features a diffusion process that introduces timestep-dependent noise into quantum states, paired with a denoising mechanism trained to reverse this contamination. This model efficiently evolves a completely mixed state into a target quantum state post-training. Our comparative analysis with Quantum Generative Adversarial Networks demonstrates QGDM's superiority, with fidelity metrics exceeding 0.99 in numerical simulations involving up to 4 qubits. Additionally, we present a Resource-Efficient version of QGDM (RE-QGDM), which minimizes the need for auxiliary qubits while maintaining impressive generative capabilities for tasks involving up to 8 qubits. These results showcase the proposed models' potential for tackling challenging quantum generation problems.
Explicit gate construction of block-encoding for Hamiltonians needed for simulating partial differential equations
Quantum computation is an emerging technology with important potential for solving certain problems pivotal in various scientific and engineering disciplines. This paper introduces an efficient quantum protocol for the explicit construction of the block-encoding for an important class of Hamiltonians. Using the Schrodingerisation technique -- which converts non-conservative PDEs into conservative ones -- this particular class of Hamiltonians is shown to be sufficient for simulating any linear partial differential equation whose coefficients are polynomial functions. The class of Hamiltonians consists of discretisations of polynomial products and sums of position and momentum operators. This construction is explicit and leverages minimal one- and two-qubit operations. The explicit construction of this block-encoding forms a fundamental building block for constructing the unitary evolution operator for this Hamiltonian. The proposed algorithm exhibits polynomial scaling with respect to the spatial partitioning size, suggesting an exponential speedup over classical finite-difference methods. This work provides an important foundation for building explicit and efficient quantum circuits for solving partial differential equations.
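For context, the block-encoding being constructed is usually defined as follows (stated in its common form, as an assumption about the paper's conventions): a unitary U acting on a ancilla qubits plus the system register is an (\alpha, a, \epsilon)-block-encoding of a matrix A if
\| A - \alpha (\langle 0|^{\otimes a} \otimes I)\, U\, (|0\rangle^{\otimes a} \otimes I) \| \le \epsilon,
so that A/\alpha sits, up to error \epsilon, in the upper-left block of U.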
Non-equilibrium correlation dynamics in the one-dimensional Fermi-Hubbard model: A testbed for the two-particle reduced density matrix theory
We explore the non-equilibrium dynamics of a one-dimensional Fermi-Hubbard system as a sensitive testbed for the capabilities of the time-dependent two-particle reduced density matrix (TD2RDM) theory to accurately describe time-dependent correlated systems. We follow the time evolution of the out-of-equilibrium finite-size Fermi-Hubbard model initialized by a quench over extended periods of time. By comparison with exact calculations for small systems and with matrix product state (MPS) calculations for larger systems but limited to short times, we demonstrate that the TD2RDM theory can accurately account for the non-equilibrium dynamics in the regime from weak to moderately strong inter-particle correlations. We find that the quality of the approximate reconstruction of the three-particle cumulant (or correlation) required for the closure of the equations of motion for the reduced density matrix is key to the accuracy of the numerical TD2RDM results. We identify the size of the dynamically induced three-particle correlations and the amplitude of cross correlations between the two- and three-particle cumulants as critical parameters that control the accuracy of the TD2RDM theory when current state-of-the art reconstruction functionals are employed.
Algorithms for the Markov Entropy Decomposition
The Markov entropy decomposition (MED) is a recently-proposed, cluster-based simulation method for finite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for performing the required steps of the MED, principally solving a minimization problem with a preconditioned Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of critical points and details of each phase. Although the method shares some qualitative similarities with exact-diagonalization, we show the MED is both more accurate and significantly more flexible.
Flying with Photons: Rendering Novel Views of Propagating Light
We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints. Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution. Combined with this dataset, we introduce an efficient neural volume rendering framework based on the transient field. This field is defined as a mapping from a 3D point and 2D direction to a high-dimensional, discrete-time signal that represents time-varying radiance at ultrafast timescales. Rendering with transient fields naturally accounts for effects due to the finite speed of light, including viewpoint-dependent appearance changes caused by light propagation delays to the camera. We render a range of complex effects, including scattering, specular reflection, refraction, and diffraction. Additionally, we demonstrate removing viewpoint-dependent propagation delays using a time warping procedure, rendering of relativistic effects, and video synthesis of direct and global components of light transport.
Thermodynamic Performance Limits for Score-Based Diffusion Models
We establish a fundamental connection between score-based diffusion models and non-equilibrium thermodynamics by deriving performance limits based on entropy rates. Our main theoretical contribution is a lower bound on the negative log-likelihood of the data that relates model performance to entropy rates of diffusion processes. We numerically validate this bound on a synthetic dataset and investigate its tightness. By building a bridge to entropy rates - system, intrinsic, and exchange entropy - we provide new insights into the thermodynamic operation of these models, drawing parallels to Maxwell's demon and implications for thermodynamic computing hardware. Our framework connects generative modeling performance to fundamental physical principles through stochastic thermodynamics.
Pauli Propagation: A Computational Framework for Simulating Quantum Systems
Classical methods to simulate quantum systems are not only a key element of the physicist's toolkit for studying many-body models but are also increasingly important for verifying and challenging upcoming quantum computers. Pauli propagation has recently emerged as a promising new family of classical algorithms for simulating digital quantum systems. Here we provide a comprehensive account of Pauli propagation, tracing its algorithmic structure from its bit-level implementation and formulation as a tree-search problem, all the way to its high-level user applications for simulating quantum circuits and dynamics. Utilising these observations, we present PauliPropagation.jl, a Julia software package that can perform rapid Pauli propagation simulation straight out-of-the-box and can be used more generally as a building block for novel simulation algorithms.
Spacetime Neural Network for High Dimensional Quantum Dynamics
We develop a spacetime neural network method with second order optimization for solving quantum dynamics from the high dimensional Schr\"{o}dinger equation. In contrast to the standard iterative first order optimization and the time-dependent variational principle, our approach utilizes the implicit mid-point method and generates the solution for all spatial and temporal values simultaneously after optimization. We demonstrate the method in the Schr\"{o}dinger equation with a self-normalized autoregressive spacetime neural network construction. Future explorations for solving different high dimensional differential equations are discussed.
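As a reminder of the time discretization named above, the implicit midpoint rule applied to the time-dependent Schrödinger equation i \partial_t \psi = H \psi with step h (written here in a standard form with \hbar = 1, not as a statement of the paper's exact discretization) reads
\psi_{n+1} = \psi_n - i h H \frac{\psi_n + \psi_{n+1}}{2}, \quad \text{i.e.} \quad (I + \tfrac{i h}{2} H)\, \psi_{n+1} = (I - \tfrac{i h}{2} H)\, \psi_n,
which is norm-preserving for Hermitian H; in the spacetime ansatz all such steps are satisfied jointly after optimization rather than marched forward in time.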
Quantum simulation of generic spin exchange models in Floquet-engineered Rydberg atom arrays
Although quantum simulation can give insight into elusive or intractable physical phenomena, many quantum simulators are unavoidably limited in the models they mimic. Such is also the case for atom arrays interacting via Rydberg states - a platform potentially capable of simulating any kind of spin exchange model, albeit with currently unattainable experimental capabilities. Here, we propose a new route towards simulating generic spin exchange Hamiltonians in atom arrays, using Floquet engineering with both global and local control. To demonstrate the versatility and applicability of our approach, we numerically investigate the generation of several spin exchange models which have yet to be realized in atom arrays, using only previously-demonstrated experimental capabilities. Our proposed scheme can be readily explored in many existing setups, providing a path to investigate a large class of exotic quantum spin models.
Enhanced Spectral Density of a Single Germanium Vacancy Center in a Nanodiamond by Cavity-Integration
Color centers in diamond, among them the negatively-charged germanium vacancy (GeV^-), are promising candidates for many applications of quantum optics such as a quantum network. For efficient implementation, the optical transitions need to be coupled to a single optical mode. Here, we demonstrate the transfer of a nanodiamond containing a single ingrown GeV^- center with excellent optical properties to an open Fabry-P\'erot microcavity by nanomanipulation utilizing an atomic force microscope. Coupling of the GeV^- defect to the cavity mode is achieved, while the optical resonator maintains a high finesse of F = 7,700 and a 48-fold spectral density enhancement is observed. This article demonstrates the integration of a GeV^- defect with a Fabry-P\'erot microcavity under ambient conditions with the potential to extend the experiments to cryogenic temperatures towards an efficient spin-photon platform.
Single-shot Quantum Signal Processing Interferometry
Quantum systems of infinite dimension, such as bosonic oscillators, provide vast resources for quantum sensing. Yet, a general theory on how to manipulate such bosonic modes for sensing beyond parameter estimation is unknown. We present a general algorithmic framework, quantum signal processing interferometry (QSPI), for quantum sensing at the fundamental limits of quantum mechanics by generalizing Ramsey-type interferometry. Our QSPI sensing protocol relies on performing nonlinear polynomial transformations on the oscillator's quadrature operators by generalizing quantum signal processing (QSP) from qubits to hybrid qubit-oscillator systems. We use our QSPI sensing framework to make efficient binary decisions on a displacement channel in the single-shot limit. Theoretical analysis suggests the sensing accuracy, given a single-shot qubit measurement, scales inversely with the sensing time or circuit depth of the algorithm. We further concatenate a series of such binary decisions to perform parameter estimation in a bit-by-bit fashion. Numerical simulations are performed to support these statements. Our QSPI protocol offers a unified framework for quantum sensing using continuous-variable bosonic systems beyond parameter estimation and establishes a promising avenue toward efficient and scalable quantum control and quantum sensing schemes beyond the NISQ era.
Quantum thermophoresis
Thermophoresis is the migration of a particle due to a thermal gradient. Here, we theoretically uncover the quantum version of thermophoresis. As a proof of principle, we analytically find a thermophoretic force on a trapped quantum particle having three energy levels in a \Lambda configuration. We then consider a model of N sites, each coupled to its first neighbors and subjected to a local bath at a certain temperature, so as to show numerically how quantum thermophoresis behaves with increasing delocalization of the quantum particle. We discuss how negative thermophoresis and the Dufour effect appear in the quantum regime.
Does provable absence of barren plateaus imply classical simulability? Or, why we need to rethink variational quantum computing
A large amount of effort has recently been put into understanding the barren plateau phenomenon. In this perspective article, we face the increasingly loud elephant in the room and ask a question that has been hinted at by many but not explicitly addressed: Can the structure that allows one to avoid barren plateaus also be leveraged to efficiently simulate the loss classically? We present strong evidence that commonly used models with provable absence of barren plateaus are also classically simulable, provided that one can collect some classical data from quantum devices during an initial data acquisition phase. This follows from the observation that barren plateaus result from a curse of dimensionality, and that current approaches for solving them end up encoding the problem into some small, classically simulable, subspaces. Thus, while stressing quantum computers can be essential for collecting data, our analysis sheds serious doubt on the non-classicality of the information processing capabilities of parametrized quantum circuits for barren plateau-free landscapes. We end by discussing caveats in our arguments, the role of smart initializations and the possibility of provably superpolynomial, or simply practical, advantages from running parametrized quantum circuits.
Optimizing quantum phase estimation for the simulation of Hamiltonian eigenstates
We revisit quantum phase estimation algorithms for the purpose of obtaining the energy levels of many-body Hamiltonians and pay particular attention to the statistical analysis of their outputs. We introduce the mean phase direction of the parent distribution associated with eigenstate inputs as a new post-processing tool. By connecting it with the unknown phase, we find that if used as its direct estimator, it exceeds the accuracy of the standard majority rule using one less bit of resolution, making evident that it can also be inverted to provide unbiased estimation. Moreover, we show how to directly use this quantity to accurately find the energy levels when the initialized state is an eigenstate of the simulated propagator during the whole time evolution, which allows for shallower algorithms. We then use IBM Q hardware to carry out the digital quantum simulation of three toy models: a two-level system, a two-spin Ising model and a two-site Hubbard model at half-filling. Methodologies are provided to implement Trotterization and reduce the variability of results in noisy intermediate scale quantum computers.
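For reference, the mean phase direction used as an estimator above is, under the usual circular-statistics definition (assumed here), the argument of the first circular moment of the M measured phases \phi_k:
\hat{\phi} = \arg\left( \frac{1}{M} \sum_{k=1}^{M} e^{i \phi_k} \right),
which uses all measurement outcomes rather than only the most frequent bit string.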
Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Efficiently compiling quantum operations remains a major bottleneck in scaling quantum computing. Today's state-of-the-art methods achieve low compilation error by combining search algorithms with gradient-based parameter optimization, but they incur long runtimes and require multiple calls to quantum hardware or expensive classical simulations, making their scaling prohibitive. Recently, machine-learning models have emerged as an alternative, though they are currently restricted to discrete gate sets. Here, we introduce a multimodal denoising diffusion model that simultaneously generates a circuit's structure and its continuous parameters for compiling a target unitary. It leverages two independent diffusion processes, one for discrete gate selection and one for parameter prediction. We benchmark the model over different experiments, analyzing the method's accuracy across varying qubit counts, circuit depths, and proportions of parameterized gates. Finally, by exploiting its rapid circuit generation, we create large datasets of circuits for particular operations and use these to extract valuable heuristics that can help us discover new insights into quantum circuit synthesis.
PauliComposer: Compute Tensor Products of Pauli Matrices Efficiently
We introduce a simple algorithm that efficiently computes tensor products of Pauli matrices. This is done by tailoring the calculation to this specific case, which avoids unnecessary operations. The strength of this strategy is benchmarked against state-of-the-art techniques, showing a remarkable acceleration. As a by-product, we provide an optimized method for one key calculation in quantum simulations: the Pauli basis decomposition of Hamiltonians.
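A minimal sketch of the underlying observation (an illustration of the idea, not the PauliComposer implementation itself): a tensor product of Pauli matrices has exactly one nonzero entry per row, so it can be assembled directly in sparse form without ever performing dense Kronecker products. Function and variable names below are illustrative.

import numpy as np
from scipy.sparse import csr_matrix

def pauli_kron(pauli: str) -> csr_matrix:
    # Sparse tensor product of single-qubit Paulis, e.g. "XZY".
    # Each row index r has exactly one nonzero column c, obtained by flipping
    # the bits where the string has X or Y; the value is a product of +-1 and +-i.
    n = len(pauli)
    dim = 1 << n
    rows = np.arange(dim)
    cols = np.zeros(dim, dtype=np.int64)
    vals = np.ones(dim, dtype=np.complex128)
    for k, p in enumerate(pauli):
        shift = n - 1 - k
        bit = (rows >> shift) & 1                 # k-th qubit bit of each row index
        if p in "XY":
            cols |= (1 - bit) << shift            # X and Y flip this bit in the column index
        else:
            cols |= bit << shift                  # I and Z keep it
        if p == "Z":
            vals *= 1.0 - 2.0 * bit               # (-1)^bit
        elif p == "Y":
            vals *= 1j * (2.0 * bit - 1.0)        # -i if bit = 0, +i if bit = 1
    return csr_matrix((vals, (rows, cols)), shape=(dim, dim))

For small n the output can be checked against a dense chain of np.kron calls; the sparse construction costs O(n 2^n) instead of O(4^n).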
An efficient Asymptotic-Preserving scheme for the Boltzmann mixture with disparate mass
In this paper, we develop and implement an efficient asymptotic-preserving (AP) scheme to solve the gas mixture of Boltzmann equations under the disparate mass scaling relevant to the so-called "epochal relaxation" phenomenon. The disparity in molecular masses, ranging across several orders of magnitude, leads to significant challenges in both the evaluation of collision operators and the design of time-stepping schemes to capture the multi-scale nature of the dynamics. A direct implementation of the spectral method faces prohibitive computational costs as the mass ratio increases due to the need to resolve vastly different thermal velocities. Unlike [I. M. Gamba, S. Jin, and L. Liu, Commun. Math. Sci., 17 (2019), pp. 1257-1289], we propose an alternative approach based on proper truncation of asymptotic expansions of the collision operators, which significantly reduces the computational complexity and works well for small \varepsilon. By incorporating the separation of three time scales in the model's relaxation process [P. Degond and B. Lucquin-Desreux, Math. Models Methods Appl. Sci., 6 (1996), pp. 405-436], we design an AP scheme that captures the specific dynamics of the disparate mass model while maintaining computational efficiency. Numerical experiments demonstrate the effectiveness of the proposed scheme in handling large mass ratios of heavy and light species, as well as capturing the epochal relaxation phenomenon.
Quantum Thermalization via Travelling Waves
Isolated quantum many-body systems which thermalize under their own dynamics are expected to act as their own thermal baths, thereby bringing their local subsystems to thermal equilibrium. Here we show that the infinite-dimensional limit of a quantum lattice model, as described by Dynamical Mean-Field theory (DMFT), provides a natural framework to understand this self-consistent thermalization process. Using the Fermi-Hubbard model as a working example, we demonstrate that the emergence of a self-consistent bath thermalising the system is characterized by a sharp thermalization front, moving ballistically and separating the initial condition from the long-time thermal fixed point. We characterize the full DMFT dynamics through an effective temperature for which we derive a travelling-wave equation of the Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) type. This equation allows us to predict the asymptotic shape of the front and its velocity, which perfectly match the full DMFT numerics. Our results provide a new angle to understand the onset of quantum thermalisation in closed isolated systems.
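For orientation, the classic FKPP equation referenced above takes the form (generic normalization, not the paper's effective-temperature variables)
\partial_t u = D \, \partial_x^2 u + r\, u (1 - u),
whose pulled fronts propagate ballistically with asymptotic velocity v^* = 2\sqrt{D r}, which is the sense in which the front shape and speed can be predicted analytically.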
Multi-state quantum simulations via model-space quantum imaginary time evolution
We introduce the framework of model space into quantum imaginary time evolution (QITE) to enable stable estimation of ground and excited states using a quantum computer. Model-space QITE (MSQITE) propagates a model space to the exact one by retaining its orthogonality, and hence is able to describe multiple states simultaneously. The quantum Lanczos (QLanczos) algorithm is extended to MSQITE to accelerate the convergence. The present scheme is found to outperform both the standard QLanczos and the recently proposed folded-spectrum QITE in simulating excited states. Moreover, we demonstrate that spin contamination can be effectively removed by shifting the imaginary time propagator, and thus excited states with a particular spin quantum number are efficiently captured without falling into the different spin states that have lower energies. We also investigate how different levels of the unitary approximation employed in MSQITE can affect the results. The effectiveness of the algorithm over QITE is demonstrated by noise simulations for the H4 model system.
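As a reminder of the propagator being generalized here, standard imaginary-time evolution acts on a state as
|\psi(\tau)\rangle = \frac{e^{-\tau H} |\psi(0)\rangle}{\| e^{-\tau H} |\psi(0)\rangle \|},
which converges, as \tau \to \infty, to the lowest-energy state having nonzero overlap with |\psi(0)\rangle; MSQITE propagates an entire orthonormal model space under this map rather than a single state.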
Neural Inverse Rendering from Propagating Light
We present the first system for physically based, neural inverse rendering from multi-viewpoint videos of propagating light. Our approach relies on a time-resolved extension of neural radiance caching -- a technique that accelerates inverse rendering by storing infinite-bounce radiance arriving at any point from any direction. The resulting model accurately accounts for direct and indirect light transport effects and, when applied to captured measurements from a flash lidar system, enables state-of-the-art 3D reconstruction in the presence of strong indirect light. Further, we demonstrate view synthesis of propagating light, automatic decomposition of captured measurements into direct and indirect components, as well as novel capabilities such as multi-view time-resolved relighting of captured scenes.
Detecting Fermi Surface Nesting Effect for Fermionic Dicke Transition by Trap Induced Localization
Recently, the statistical effect of fermionic superradiance has been confirmed by a series of experiments both in free space and in a cavity. The Pauli blocking effect can be visualized by a 1/2 scaling of the Dicke transition critical pumping strength with particle number N_{\rm at} for fermions in a trap. However, the Fermi surface nesting effect, which manifests the enhancement of superradiance by Fermi statistics, is still very hard to identify. Here we study the influence of fermions localized at the trap edge when both the pumping optical lattice and the trap are present. We find that, due to localization, the statistical effect in the superradiant transition is enhanced. Two new scalings of the critical pumping strength, 4/3 and 2/3, are observed for intermediate particle numbers, while the Pauli blocking scaling 1/3 (2d case) in the large-particle-number limit is unaffected. Further, we find that the 4/3 scaling increases as a power law with the ratio between the recoil energy and the trap frequency along the pumping laser direction. The divergence of this scaling of the critical pumping strength with N_{\rm at} in the E_R/\omega_x \rightarrow +\infty limit can be identified as the Fermi surface nesting effect. We thus find a practical experimental scheme for visualizing the long-desired Fermi surface nesting effect with the help of trap-induced localization in a two-dimensional Fermi gas in a cavity.
Sub-second spin and lifetime-limited optical coherences in ^{171}Yb^{3+}:CaWO_4
Optically addressable solid-state spins have been extensively studied for quantum technologies, offering unique advantages for quantum computing, communication, and sensing. Advancing these applications is generally limited by finding materials that simultaneously provide lifetime-limited optical and long spin coherences. Here, we introduce ^{171}Yb^{3+} ions doped into a CaWO_4 crystal. We perform high-resolution spectroscopy of the excited state, and demonstrate all-optical coherent control of the electron-nuclear spin ensemble. We find narrow inhomogeneous broadening of the optical transitions of 185 MHz and a radiative-lifetime-limited coherence time up to 0.75 ms. In addition, we measure a spin-transition ensemble linewidth of 5 kHz and an electron-nuclear spin coherence time reaching 0.15 seconds at zero magnetic field, at temperatures between 50 mK and 1 K. These results demonstrate the potential of ^{171}Yb^{3+}:CaWO_4 as a low-noise platform for building quantum technologies with ensemble-based memories, microwave-to-optical transducers, and optically addressable single-ion spin qubits.
Stim: a fast stabilizer circuit simulator
This paper presents "Stim", a fast simulator for quantum stabilizer circuits. The paper explains how Stim works and compares it to existing tools. With no foreknowledge, Stim can analyze a distance 100 surface code circuit (20 thousand qubits, 8 million gates, 1 million measurements) in 15 seconds and then begin sampling full circuit shots at a rate of 1 kHz. Stim uses a stabilizer tableau representation, similar to Aaronson and Gottesman's CHP simulator, but with three main improvements. First, Stim improves the asymptotic complexity of deterministic measurement from quadratic to linear by tracking the inverse of the circuit's stabilizer tableau. Second, Stim improves the constant factors of the algorithm by using a cache-friendly data layout and 256-bit-wide SIMD instructions. Third, Stim only uses expensive stabilizer tableau simulation to create an initial reference sample. Further samples are collected in bulk by using that sample as a reference for batches of Pauli frames propagating through the circuit.
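Stim ships with a Python front end; a minimal usage sketch of the bulk sampling described above (the circuit and shot count are arbitrary illustrative choices):

import stim

# Build a tiny stabilizer circuit: Bell-pair preparation followed by measurement.
circuit = stim.Circuit("""
    H 0
    CNOT 0 1
    M 0 1
""")

# Compile a measurement sampler once, then draw shots in bulk.
sampler = circuit.compile_sampler()
print(sampler.sample(5))   # 5 rows of correlated measurement outcomes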
Precision measurement of the last bound states in H_2 and determination of the H + H scattering length
The binding energies of the five bound rotational levels J=0-4 in the highest vibrational level v=14 in the X^1\Sigma_g^+ ground electronic state of H_2 were measured in a three-step ultraviolet-laser experiment. Two-photon UV-photolysis of H_2S produced population in these high-lying bound states, which were subsequently interrogated at high precision via Doppler-free spectroscopy of the F^1\Sigma_g^+ - X^1\Sigma_g^+ system. A third UV-laser was used for detection through auto-ionizing resonances. The experimentally determined binding energies were found to be in excellent agreement with calculations based on non-adiabatic perturbation theory, also including relativistic and quantum electrodynamical contributions. The s-wave scattering length of the H + H system is derived from the binding energy of the last bound J=0 level via a direct semi-empirical approach, yielding a value of a_s = 0.2724(5) a_0, in good agreement with a result from a previously followed theoretical approach. The subtle effect of the m\alpha^4 relativity contribution to a_s was found to be significant. In a similar manner, a value for the p-wave scattering volume is determined via the J=1 binding energy, yielding a_p = -134.0000(6) a_0^3. The binding energy of the last bound state in H_2, the (v=14, J=4) level, is determined at 0.023(4) cm^{-1}, in good agreement with calculation. The effect of the hyperfine substructure caused by the two hydrogen atoms at large internuclear separation, giving rise to three distinct dissociation limits, is discussed.
rd-spiral: An open-source Python library for learning 2D reaction-diffusion dynamics through pseudo-spectral method
We introduce rd-spiral, an open-source Python library for simulating 2D reaction-diffusion systems using pseudo-spectral methods. The framework combines FFT-based spatial discretization with adaptive Dormand-Prince time integration, achieving exponential convergence while maintaining pedagogical clarity. We analyze three dynamical regimes: stable spirals, spatiotemporal chaos, and pattern decay, revealing extreme non-Gaussian statistics (kurtosis >96) in stable states. Information-theoretic metrics show 10.7% reduction in activator-inhibitor coupling during turbulence versus 6.5% in stable regimes. The solver handles stiffness ratios >6:1 with features including automated equilibrium classification and checkpointing. Effect sizes (\delta = 0.37--0.78) distinguish regimes, with asymmetric field sensitivities to perturbations. By balancing computational rigor with educational transparency, rd-spiral bridges theoretical and practical nonlinear dynamics.
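To make the pseudo-spectral idea concrete, here is a generic FFT-based diffusion substep for a periodic 2D field, as one would use inside a splitting scheme for reaction-diffusion systems. This is an illustrative sketch, not rd-spiral's actual API; function names and parameters are placeholders.

import numpy as np

def diffusion_step_spectral(u, D, dt, L):
    # One diffusion substep on an (N, N) periodic field u over a square domain of side L.
    # The Laplacian is diagonal in Fourier space, so exp(-D * k^2 * dt) is exact there.
    N = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    u_hat = np.fft.fft2(u)
    u_hat *= np.exp(-D * (kx**2 + ky**2) * dt)      # exact diffusion in Fourier space
    return np.real(np.fft.ifft2(u_hat))

A reaction substep (an explicit update of the local kinetics) would be interleaved with this diffusion substep; rd-spiral itself couples the spectral discretization to adaptive Dormand-Prince time integration, as stated above.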
Evaluating noises of boson sampling with statistical benchmark methods
The lack of self-correcting codes hinders the development of large-scale, robust boson sampling. It is therefore important to know the noise levels in order to cautiously demonstrate a quantum computational advantage or realize certain tasks. Based on statistical benchmark methods such as the correlators and the clouds, which were initially proposed to discriminate boson sampling from mockups, we quantitatively evaluate the noise due to partial photon distinguishability and to photon loss compensated by dark counts. This is feasible because the imbalances of the output distribution, which result from multi-photon interference, are suppressed by noise. This is also why the evaluation performs better when high-order correlators or the corresponding clouds are employed. Our results indicate that statistical benchmark methods can also be used to evaluate the noise in boson sampling.
Experimental Estimation of Quantum State Properties from Classical Shadows
Full quantum tomography of high-dimensional quantum systems is experimentally infeasible due to the exponential scaling of the number of required measurements with the number of qubits in the system. However, several ideas have recently been proposed for predicting a limited number of features of these states, or for estimating the expectation values of operators, without the need for full state reconstruction. These ideas go under the general name of shadow tomography. Here we provide an experimental demonstration of property estimation based on classical shadows proposed in [H.-Y. Huang, R. Kueng, J. Preskill. Nat. Phys. https://doi.org/10.1038/s41567-020-0932-7 (2020)] and study its performance in a quantum optical experiment with high-dimensional spatial states of photons. We show on experimental data how this procedure outperforms conventional state reconstruction in fidelity estimation from a limited number of measurements.
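The classical-shadow estimator being demonstrated can be illustrated in its simplest single-qubit, random-Pauli-basis form. The snapshot inversion 3|v><v| - I below is the standard inverse of the single-qubit random-Pauli measurement channel; this is a toy numerical sketch, not the experimental high-dimensional protocol, and the state and observable are placeholders.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [X, Y, Z]

def shadow_estimate(rho, obs, shots=20000, seed=0):
    # Estimate tr(obs @ rho) from simulated random-Pauli-basis measurements.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(shots):
        basis = PAULIS[rng.integers(3)]                 # pick a random measurement basis
        _, evecs = np.linalg.eigh(basis)
        probs = np.real([v.conj() @ rho @ v for v in evecs.T])
        k = rng.choice(2, p=probs / probs.sum())        # simulate the measurement outcome
        v = evecs[:, k]
        snapshot = 3 * np.outer(v, v.conj()) - I2       # invert the measurement channel
        total += np.real(np.trace(obs @ snapshot))
    return total / shots

rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)     # example state (placeholder)
print(shadow_estimate(rho, Z), np.real(np.trace(Z @ rho)))  # estimate vs exact value 0.4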
Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage
While recent breakthroughs have proven the ability of noisy intermediate-scale quantum (NISQ) devices to achieve quantum advantage in classically-intractable sampling tasks, the use of these devices for solving more practically relevant computational problems remains a challenge. Proposals for attaining practical quantum advantage typically involve parametrized quantum circuits (PQCs), whose parameters can be optimized to find solutions to diverse problems throughout quantum simulation and machine learning. However, training PQCs for real-world problems remains a significant practical challenge, largely due to the phenomenon of barren plateaus in the optimization landscapes of randomly-initialized quantum circuits. In this work, we introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for PQCs, which we show significantly improves the trainability and performance of PQCs on a variety of problems. Given a specific optimization task, this method first utilizes tensor network (TN) simulations to identify a promising quantum state, which is then converted into gate parameters of a PQC by means of a high-performance decomposition procedure. We show that this learned initialization avoids barren plateaus, and effectively translates increases in classical resources to enhanced performance and speed in training quantum circuits. By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing, and opens up new avenues to harness the power of modern quantum hardware for realizing practical quantum advantage.
Compositional Analysis of Fragrance Accords Using Femtosecond Thermal Lens Spectroscopy
Femtosecond thermal lens spectroscopy (FTLS) is a powerful analytical tool, yet its application to complex, multi-component mixtures like fragrance accords remains limited. Here, we introduce and validate a unified metric, the Femtosecond Thermal Lens Integrated Magnitude (FTL-IM), to characterize such mixtures. The FTL-IM, derived from the integrated signal area, provides a direct, model-free measure of the total thermo-optical response, including critical convective effects. Applying the FTL-IM to complex six-component accords, we demonstrate its utility in predicting a mixture's thermal response from its composition through linear additivity with respect to component mole fractions. Our method quantifies the accords' behavior, revealing both the baseline contributions of components and the dominant, non-linear effects of highly-active species like Methyl Anthranilate. This consistency is validated across single-beam Z-scan, dual-beam Z-scan, and time-resolved FTLS measurements. The metric also demonstrates the necessity of single-beam measurements for interpreting dual-beam data. This work establishes a rapid, quantitative method for fragrance analysis, offering advantages for quality control by directly linking a mixture's bulk thermo-optical properties to its composition.
Modeling transport in weakly collisional plasmas using thermodynamic forcing
How momentum, energy, and magnetic fields are transported in the presence of macroscopic gradients is a fundamental question in plasma physics. Answering this question is especially challenging for weakly collisional, magnetized plasmas, where macroscopic gradients influence the plasma's microphysical structure. In this paper, we introduce thermodynamic forcing, a new method for systematically modeling how macroscopic gradients in magnetized or unmagnetized plasmas shape the distribution functions of constituent particles. In this method, an anomalous force is applied to the particles, inducing the anisotropy that would naturally emerge due to macroscopic gradients in weakly collisional plasmas. We implement thermodynamic forcing in particle-in-cell (TF-PIC) simulations using a modified Vay particle pusher and validate it against analytic solutions of the equations of motion. We then carry out a series of simulations of electron-proton plasmas with periodic boundary conditions using TF-PIC. First, we confirm that the properties of two electron-scale kinetic instabilities -- one driven by a temperature gradient and the other by pressure anisotropy -- are consistent with previous results. Then, we demonstrate that in the presence of multiple macroscopic gradients, the saturated state can differ significantly from current expectations. This work enables, for the first time, systematic and self-consistent transport modeling in weakly collisional plasmas, with broad applications in astrophysics, laser-plasma physics, and inertial confinement fusion.
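The numerical ingredient at the heart of such a scheme is an extra force term inside the particle pusher. The sketch below uses a plain non-relativistic Boris-style step as a simplified stand-in for the modified Vay pusher described in the abstract; the anomalous force value is a made-up example.

```python
import numpy as np

def push_with_thermodynamic_forcing(x, v, q_m, E, B, F_anom, dt):
    """One Boris-style step (non-relativistic stand-in for a modified Vay pusher)
    with an extra anomalous force F_anom mimicking macroscopic-gradient driving."""
    # Half electric kick plus half anomalous kick
    v_minus = v + 0.5 * dt * (q_m * E + F_anom)
    # Magnetic rotation
    t = 0.5 * dt * q_m * B
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half kick and position update
    v_new = v_plus + 0.5 * dt * (q_m * E + F_anom)
    x_new = x + dt * v_new
    return x_new, v_new

x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
F_anom = np.array([0.0, 1e-3, 0.0])  # hypothetical anomalous force
for _ in range(100):
    x, v = push_with_thermodynamic_forcing(x, v, 1.0, E, B, F_anom, 0.01)
print(x, v)
```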
Finding extremal periodic orbits with polynomial optimisation, with application to a nine-mode model of shear flow
Tobasco et al. [Physics Letters A, 382:382-386, 2018; see https://doi.org/10.1016/j.physleta.2017.12.023] recently suggested that trajectories of ODE systems that optimize the infinite-time average of a certain observable can be localized using sublevel sets of a function that arises when bounding such averages using so-called auxiliary functions. In this paper we demonstrate that this idea is viable and allows for the computation of extremal unstable periodic orbits (UPOs) for polynomial ODE systems. First, we prove that polynomial optimization is guaranteed to produce auxiliary functions that yield near-sharp bounds on time averages, which is required in order to localize the extremal orbit accurately. Second, we show that points inside the relevant sublevel sets can be computed efficiently through direct nonlinear optimization. Such points provide good initial conditions for UPO computations. As a proof of concept, we then combine these methods with a single-shooting Newton-Raphson algorithm to study extremal UPOs for a nine-dimensional model of sinusoidally forced shear flow. We discover three previously unknown families of UPOs, one of which simultaneously minimizes the mean energy dissipation rate and maximizes the mean perturbation energy relative to the laminar state for Reynolds numbers approximately between 81.24 and 125.
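A minimal sketch of the single-shooting step used to converge a periodic orbit, applied to a toy planar limit cycle rather than the nine-mode shear-flow model: the residual enforces return-to-start plus a phase condition, and scipy's fsolve stands in for a Newton-Raphson solver; the initial guess plays the role of a point drawn from an auxiliary-function sublevel set.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Toy vector field with a known limit cycle (stand-in for the nine-mode shear-flow model)
def rhs(t, u):
    x, y = u
    r2 = x * x + y * y
    return [x - y - x * r2, x + y - y * r2]

def shooting_residual(z):
    """Single-shooting residual: return-to-start mismatch plus a phase condition y0 = 0."""
    x0, y0, T = z
    sol = solve_ivp(rhs, (0.0, T), [x0, y0], rtol=1e-10, atol=1e-12)
    xf, yf = sol.y[:, -1]
    return [xf - x0, yf - y0, y0]

# Initial guess, e.g., taken from a point inside an auxiliary-function sublevel set
x0, y0, T = fsolve(shooting_residual, [0.9, 0.0, 6.0])
print("periodic orbit point:", x0, y0, "period:", T)  # expect period ~ 2*pi
```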
Stochastic Interpolants: A Unifying Framework for Flows and Diffusions
A class of generative models that unifies flow-based and diffusion-based methods is introduced. These models extend the framework proposed in Albergo & Vanden-Eijnden (2023), enabling the use of a broad class of continuous-time stochastic processes called 'stochastic interpolants' to bridge any two arbitrary probability density functions exactly in finite time. These interpolants are built by combining data from the two prescribed densities with an additional latent variable that shapes the bridge in a flexible way. The time-dependent probability density function of the stochastic interpolant is shown to satisfy a first-order transport equation as well as a family of forward and backward Fokker-Planck equations with tunable diffusion coefficient. Upon consideration of the time evolution of an individual sample, this viewpoint immediately leads to both deterministic and stochastic generative models based on probability flow equations or stochastic differential equations with an adjustable level of noise. The drift coefficients entering these models are time-dependent velocity fields characterized as the unique minimizers of simple quadratic objective functions, one of which is a new objective for the score of the interpolant density. We show that minimization of these quadratic objectives leads to control of the likelihood for generative models built upon stochastic dynamics, while likelihood control for deterministic dynamics is more stringent. We also discuss connections with other methods such as score-based diffusion models, stochastic localization processes, probabilistic denoising techniques, and rectifying flows. In addition, we demonstrate that stochastic interpolants recover the Schrödinger bridge between the two target densities when explicitly optimizing over the interpolant. Finally, algorithmic aspects are discussed and the approach is illustrated on numerical examples.
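To make the construction concrete, the sketch below implements one simple interpolant choice, x_t = (1-t) x_0 + t x_1 + sqrt(2t(1-t)) z, and a Monte-Carlo estimate of the quadratic objective E||v(x_t, t) - dx_t/dt||^2 for a candidate velocity field; the specific coefficient functions and toy densities are assumptions for illustration, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolant(x0, x1, z, t):
    """One simple stochastic interpolant: linear bridge plus a latent-noise term."""
    return (1.0 - t) * x0 + t * x1 + np.sqrt(2.0 * t * (1.0 - t)) * z

def interpolant_dot(x0, x1, z, t):
    """Time derivative of the interpolant above (regression target for the velocity field)."""
    gdot = (1.0 - 2.0 * t) / np.sqrt(2.0 * t * (1.0 - t) + 1e-12)
    return -x0 + x1 + gdot * z

def velocity_loss(velocity_fn, x0_batch, x1_batch, n=4096):
    """Monte-Carlo estimate of the quadratic objective E||v(x_t, t) - dx_t/dt||^2."""
    i0 = rng.integers(len(x0_batch), size=n)
    i1 = rng.integers(len(x1_batch), size=n)
    t = rng.uniform(1e-3, 1 - 1e-3, size=n)
    z = rng.standard_normal(n)
    xt = interpolant(x0_batch[i0], x1_batch[i1], z, t)
    target = interpolant_dot(x0_batch[i0], x1_batch[i1], z, t)
    return np.mean((velocity_fn(xt, t) - target) ** 2)

# Example: samples from two 1-D densities and a trivial candidate velocity field
x0_samples = rng.standard_normal(10000)               # base density
x1_samples = 3.0 + 0.5 * rng.standard_normal(10000)   # target density
print(velocity_loss(lambda x, t: 3.0 - x, x0_samples, x1_samples))
```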
Teleportation of entanglement over 143 km
As a direct consequence of the no-cloning theorem, the deterministic amplification as in classical communication is impossible for quantum states. This calls for more advanced techniques in a future global quantum network, e.g. for cloud quantum computing. A unique solution is the teleportation of an entangled state, i.e. entanglement swapping, representing the central resource to relay entanglement between distant nodes. Together with entanglement purification and a quantum memory it constitutes a so-called quantum repeater. Since the aforementioned building blocks have been individually demonstrated in laboratory setups only, the applicability of the required technology in real-world scenarios remained to be proven. Here we present a free-space entanglement-swapping experiment between the Canary Islands of La Palma and Tenerife, verifying the presence of quantum entanglement between two previously independent photons separated by 143 km. We obtained an expectation value for the entanglement-witness operator, more than 6 standard deviations beyond the classical limit. By consecutive generation of the two required photon pairs and space-like separation of the relevant measurement events, we also showed the feasibility of the swapping protocol in a long-distance scenario, where the independence of the nodes is highly demanded. Since our results already allow for efficient implementation of entanglement purification, we anticipate our assay to lay the ground for a fully-fledged quantum repeater over a realistic high-loss and even turbulent quantum channel.
Probing Off-diagonal Eigenstate Thermalization with Tensor Networks
Energy filter methods in combination with quantum simulation can efficiently access the properties of quantum many-body systems at finite energy densities [Lu et al. PRX Quantum 2, 020321 (2021)]. Classically simulating this algorithm with tensor networks can be used to investigate the microcanonical properties of large spin chains, as recently shown in [Yang et al. Phys. Rev. B 106, 024307 (2022)]. Here we extend this strategy to explore the properties of off-diagonal matrix elements of observables in the energy eigenbasis, fundamentally connected to the thermalization behavior and the eigenstate thermalization hypothesis. We test the method on integrable and non-integrable spin chains of up to 60 sites, much larger than accessible with exact diagonalization. Our results allow us to explore the scaling of the off-diagonal functions with the size and energy difference, and to establish quantitative differences between integrable and non-integrable cases.
Frequency-domain multiplexing of SNSPDs with tunable superconducting resonators
This work culminates in a demonstration of an alternative Frequency Domain Multiplexing (FDM) scheme for Superconducting Nanowire Single-Photon Detectors (SNSPDs) using the Kinetic inductance Parametric UP-converter (KPUP). There are multiple multiplexing architectures for SNSPDs that are already in use, but FDM could prove superior in applications where the operational bias currents are very low, especially for mid- and far-infrared SNSPDs. Previous FDM schemes integrated the SNSPD within the resonator, while in this work we use an external resonator, which gives more flexibility to optimize the SNSPD architecture. The KPUP is a DC-biased superconducting resonator in which a nanowire is used as its inductive element to enable sensitivity to current perturbations. When coupled to an SNSPD, the KPUP can be used to read out current pulses on the few μA scale. The KPUP is made out of NbTiN, which has high non-linear kinetic inductance for increased sensitivity at higher current bias and high operating temperature. Meanwhile, the SNSPD is made from WSi, which is a popular material for broadband SNSPDs. To read out the KPUP and SNSPD array, a software-defined radio platform and a graphics processing unit are used. Frequency Domain Multiplexed SNSPDs have applications in astronomy, remote sensing, exoplanet science, dark matter detection, and quantum sensing.
kh2d-solver: A Python Library for Idealized Two-Dimensional Incompressible Kelvin-Helmholtz Instability
We present an open-source Python library for simulating two-dimensional incompressible Kelvin-Helmholtz instabilities in stratified shear flows. The solver employs a fractional-step projection method with spectral Poisson solution via Fast Sine Transform, achieving second-order spatial accuracy. Implementation leverages NumPy, SciPy, and Numba JIT compilation for efficient computation. Four canonical test cases explore Reynolds numbers 1000-5000 and Richardson numbers 0.1-0.3: classical shear layer, double shear configuration, rotating flow, and forced turbulence. Statistical analysis using Shannon entropy and complexity indices reveals that double shear layers achieve 2.8 times higher mixing rates than forced turbulence despite lower Reynolds numbers. The solver runs efficiently on standard desktop hardware, with 384×192 grid simulations completing in approximately 31 minutes. Results demonstrate that mixing efficiency depends on instability generation pathways rather than intensity measures alone, challenging Richardson number-based parameterizations and suggesting refinements for subgrid-scale representation in climate models.
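A minimal sketch of the spectral Poisson step mentioned above: a type-I discrete sine transform diagonalizes the 5-point Laplacian under homogeneous Dirichlet walls. This is a generic illustration under those assumed boundary conditions, not the kh2d-solver API.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet_2d(f, dx, dy):
    """Solve laplacian(phi) = f on interior points with homogeneous Dirichlet walls,
    by diagonalizing the 5-point Laplacian with a type-I discrete sine transform."""
    ny, nx = f.shape
    fhat = dstn(f, type=1)
    kx = np.arange(1, nx + 1)
    ky = np.arange(1, ny + 1)
    lam_x = (2.0 * np.cos(np.pi * kx / (nx + 1)) - 2.0) / dx**2
    lam_y = (2.0 * np.cos(np.pi * ky / (ny + 1)) - 2.0) / dy**2
    denom = lam_y[:, None] + lam_x[None, :]
    return idstn(fhat / denom, type=1)

# Quick check against a manufactured solution phi = sin(pi x) sin(pi y) on (0,1)^2
n = 191
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y = np.meshgrid(x, x)
phi_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2 * np.pi**2 * phi_exact
phi = poisson_dirichlet_2d(f, x[1] - x[0], x[1] - x[0])
print("max error:", np.abs(phi - phi_exact).max())
```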
Entanglement-verified time distribution in a metropolitan network
The precise synchronization of distant clocks is a fundamental requirement for a wide range of applications. Here, we experimentally demonstrate a novel approach of quantum clock synchronization utilizing entangled and correlated photon pairs generated by a quantum dot at telecom wavelength. By distributing these entangled photons through a metropolitan fiber network in the Stockholm area and measuring the remote correlations, we achieve a synchronization accuracy of tens of picoseconds by leveraging the tight time correlation between the entangled photons. We show that our synchronization scheme is secure against spoofing attacks by performing a remote quantum state tomography to verify the origin of the entangled photons. We measured a distributed maximum entanglement fidelity of 0.817 ± 0.040 to the |Φ⁺⟩ Bell state and a concurrence of 0.660 ± 0.086. These results highlight the potential of quantum dot-generated entangled pairs as a shared resource for secure time synchronization and quantum key distribution in real-world quantum networks.
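The timing side of such a scheme can be illustrated with a simple coincidence-histogram estimator: the peak of the time-difference histogram between the two detectors' timestamp streams gives the relative clock offset. The sketch below runs on synthetic timestamps with made-up jitter and offset values, not the experiment's data.

```python
import numpy as np

def clock_offset_from_coincidences(tags_a, tags_b, window=200e-9, bin_width=10e-12):
    """Estimate the relative clock offset from the peak of the coincidence histogram
    between two timestamp streams sharing tightly time-correlated photon pairs."""
    diffs, j = [], 0
    for ta in tags_a:
        while j < len(tags_b) and tags_b[j] < ta - window:
            j += 1
        k = j
        while k < len(tags_b) and tags_b[k] <= ta + window:
            diffs.append(tags_b[k] - ta)
            k += 1
    bins = np.arange(-window, window + bin_width, bin_width)
    hist, edges = np.histogram(diffs, bins=bins)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])

rng = np.random.default_rng(1)
true_offset = 3.7e-9
pair_times = np.sort(rng.uniform(0, 1e-3, 5000))
tags_a = pair_times + rng.normal(0, 30e-12, pair_times.size)                 # local detector
tags_b = pair_times + true_offset + rng.normal(0, 30e-12, pair_times.size)  # remote detector
print("recovered offset [ns]:", 1e9 * clock_offset_from_coincidences(tags_a, tags_b))
```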
Deep Learning with Coherent Nanophotonic Circuits
Artificial Neural Networks are computational network models inspired by signal processing in the brain. These models have dramatically improved the performance of many learning tasks, including speech and object recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made to develop electronic architectures tuned to implement artificial neural networks that improve upon both computational speed and energy efficiency. Here, we propose a new architecture for a fully-optical neural network that, using unique advantages of optics, promises a computational speed enhancement of at least two orders of magnitude over the state-of-the-art and three orders of magnitude in power efficiency for conventional learning tasks. We experimentally demonstrate essential parts of our architecture using a programmable nanophotonic processor.
Multiplexed quantum repeaters based on dual-species trapped-ion systems
Trapped ions form an advanced technology platform for quantum information processing with long qubit coherence times, high-fidelity quantum logic gates, optically active qubits, and a potential to scale up in size while preserving a high level of connectivity between qubits. These traits make them attractive not only for quantum computing but also for quantum networking. Dedicated, special-purpose trapped-ion processors in conjunction with suitable interconnecting hardware can be used to form quantum repeaters that enable high-rate quantum communications between distant trapped-ion quantum computers in a network. In this regard, hybrid traps with two distinct species of ions, where one ion species can generate ion-photon entanglement that is useful for optically interfacing with the network and the other has long memory lifetimes, useful for qubit storage, have been proposed for entanglement distribution. We consider an architecture for a repeater based on such dual-species trapped-ion systems. We propose and analyze a protocol based on spatial and temporal mode multiplexing for entanglement distribution across a line network of such repeaters. Our protocol offers enhanced rates compared to rates previously reported for such repeaters. We determine the ion resources required at the repeaters to attain the enhanced rates, and the best rates attainable when constraints are placed on the number of repeaters and the number of ions per repeater. Our results bolster the case for near-term trapped-ion systems as quantum repeaters for long-distance quantum communications.
SeQUeNCe: A Customizable Discrete-Event Simulator of Quantum Networks
Recent advances in quantum information science enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. Our simulator consists of five modules: Hardware models, Entanglement Management protocols, Resource Management, Network Management, and Application. This framework is suitable for simulation of quantum network prototypes that capture the breadth of current and future hardware technologies and protocols. We implement a comprehensive suite of network protocols and demonstrate the use of SeQUeNCe by simulating a photonic quantum network with nine routers equipped with quantum memories. The simulation capabilities are illustrated in three use cases. We show the dependence of quantum network throughput on several key hardware parameters and study the impact of classical control message latency. We also investigate quantum memory usage efficiency in routers and demonstrate that redistributing memory according to anticipated load increases network capacity by 69.1% and throughput by 6.8%. We design SeQUeNCe to enable comparisons of alternative quantum network technologies, experiment planning, and validation and to aid with new protocol design. We are releasing SeQUeNCe as an open source tool and aim to generate community interest in extending it.
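For intuition about what such a simulator does at its core, here is a toy discrete-event kernel: events are scheduled on a priority queue keyed by timestamp and processed in time order. It mirrors the general structure of discrete-event network simulators but is not the SeQUeNCe API; all names are illustrative.

```python
import heapq

class EventLoop:
    """Tiny discrete-event kernel in the spirit of (but not the API of) SeQUeNCe."""
    def __init__(self):
        self._queue, self._counter, self.now = [], 0, 0.0

    def schedule(self, delay, handler, *args):
        heapq.heappush(self._queue, (self.now + delay, self._counter, handler, args))
        self._counter += 1

    def run(self):
        while self._queue:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(*args)

# Toy use: a photon emitted now arrives at a remote node after a fiber delay
loop = EventLoop()
def arrive(node):
    print(f"t = {loop.now * 1e6:.1f} us: photon arrived at {node}")
loop.schedule(25e-6, arrive, "router_B")  # roughly 5 km of fiber
loop.run()
```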
Reconstruct Anything Model: a lightweight foundation model for computational imaging
Most existing learning-based methods for solving imaging inverse problems can be roughly divided into two classes: iterative algorithms, such as plug-and-play and diffusion methods, that leverage pretrained denoisers, and unrolled architectures that are trained end-to-end for specific imaging problems. Iterative methods in the first class are computationally costly and often provide suboptimal reconstruction performance, whereas unrolled architectures are generally specific to a single inverse problem and require expensive training. In this work, we propose a novel non-iterative, lightweight architecture that incorporates knowledge about the forward operator (acquisition physics and noise parameters) without relying on unrolling. Our model is trained to solve a wide range of inverse problems beyond denoising, including deblurring, magnetic resonance imaging, computed tomography, inpainting, and super-resolution. The proposed model can be easily adapted to unseen inverse problems or datasets with a few fine-tuning steps (up to a few images) in a self-supervised way, without ground-truth references. Through a series of experiments, we demonstrate state-of-the-art performance from medical imaging to low-photon imaging and microscopy.
Differentiable Quantum Architecture Search in Asynchronous Quantum Reinforcement Learning
The emergence of quantum reinforcement learning (QRL) is propelled by advancements in quantum computing (QC) and machine learning (ML), particularly through quantum neural networks (QNN) built on variational quantum circuits (VQC). These advancements have proven successful in addressing sequential decision-making tasks. However, constructing effective QRL models demands significant expertise due to challenges in designing quantum circuit architectures, including data encoding and parameterized circuits, which profoundly influence model performance. In this paper, we propose addressing this challenge with differentiable quantum architecture search (DiffQAS), enabling trainable circuit parameters and structure weights using gradient-based optimization. Furthermore, we enhance training efficiency through asynchronous reinforcement learning (RL) methods facilitating parallel training. Through numerical simulations, we demonstrate that our proposed DiffQAS-QRL approach achieves performance comparable to manually-crafted circuit architectures across considered environments, showcasing stability across diverse scenarios. This methodology offers a pathway for designing QRL models without extensive quantum knowledge, ensuring robust performance and fostering broader application of QRL.
Photon-Starved Scene Inference using Single Photon Cameras
Scene understanding under low-light conditions is a challenging problem. This is due to the small number of photons captured by the camera and the resulting low signal-to-noise ratio (SNR). Single-photon cameras (SPCs) are an emerging sensing modality that are capable of capturing images with high sensitivity. Despite having minimal read-noise, images captured by SPCs in photon-starved conditions still suffer from strong shot noise, preventing reliable scene inference. We propose photon scale-space, a collection of high-SNR images spanning a wide range of photons-per-pixel (PPP) levels (but with the same scene content), as guides to train inference models on low-photon-flux images. We develop training techniques that push images with different illumination levels closer to each other in feature representation space. The key idea is that having a spectrum of different brightness levels during training enables effective guidance, and increases robustness to shot noise even in extreme noise cases. Based on the proposed approach, we demonstrate, via simulations and real experiments with a SPAD camera, high-performance on various inference tasks such as image classification and monocular depth estimation under ultra low-light, down to < 1 PPP.
Quantum Denoising Diffusion Models
In recent years, machine learning models like DALL-E, Craiyon, and Stable Diffusion have gained significant attention for their ability to generate high-resolution images from concise descriptions. Concurrently, quantum computing is showing promising advances, especially with quantum machine learning which capitalizes on quantum mechanics to meet the increasing computational requirements of traditional machine learning algorithms. This paper explores the integration of quantum machine learning and variational quantum circuits to augment the efficacy of diffusion-based image generation models. Specifically, we address two challenges of classical diffusion models: their low sampling speed and the extensive parameter requirements. We introduce two quantum diffusion models and benchmark their capabilities against their classical counterparts using MNIST digits, Fashion MNIST, and CIFAR-10. Our models surpass the classical models with similar parameter counts in terms of performance metrics FID, SSIM, and PSNR. Moreover, we introduce a consistency model unitary single sampling architecture that combines the diffusion procedure into a single step, enabling a fast one-step image generation.
Scaling of free cumulants in closed system-bath setups
The Eigenstate Thermalization Hypothesis (ETH) has been established as a cornerstone for understanding thermalization in quantum many-body systems. Recently, there has been growing interest in the full ETH, which extends the framework of the conventional ETH and postulates a smooth function to describe the multi-point correlations among matrix elements. Within this framework, free cumulants play a central role, and most previous studies have primarily focused on closed systems. In this paper, we extend the analysis to a system-bath setup, considering both an idealized case with a random-matrix bath and a more realistic scenario where the bath is modeled as a defect Ising chain. In both cases, we uncover a universal scaling of microcanonical free cumulants of system observables with respect to the interaction strength. Furthermore, we establish a connection between this scaling behavior and the thermalization dynamics of the thermal free cumulants of corresponding observables.
Light Schrödinger Bridge
Despite the recent advances in the field of computational Schrödinger Bridges (SB), most existing SB solvers are still heavy-weighted and require complex optimization of several neural networks. It turns out that there is no principal solver which plays the role of a simple-yet-effective baseline for SB just like, e.g., the k-means method in clustering, logistic regression in classification or the Sinkhorn algorithm in discrete optimal transport. We address this issue and propose a novel fast and simple SB solver. Our development is a smart combination of two ideas which recently appeared in the field: (a) parameterization of the Schrödinger potentials with sum-exp quadratic functions and (b) viewing the log-Schrödinger potentials as energy functions. We show that combined together these ideas yield a lightweight, simulation-free and theoretically justified SB solver with a simple, straightforward optimization objective. As a result, it allows solving SB in moderate dimensions in a matter of minutes on CPU without a painful hyperparameter selection. Our light solver resembles the Gaussian mixture model which is widely used for density estimation. Inspired by this similarity, we also prove an important theoretical result showing that our light solver is a universal approximator of SBs. Furthermore, we analyze the generalization error of our light solver. The code for our solver can be found at https://github.com/ngushchin/LightSB
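To illustrate what a "sum-exp quadratic" potential looks like in practice, the toy sketch below evaluates log sum_k w_k exp(-0.5 (x - m_k)^T P_k (x - m_k)), which is essentially the log-density of a Gaussian mixture up to normalization; all component parameters are made up, and this is not the LightSB implementation.

```python
import numpy as np

def sumexp_quadratic_potential(x, weights, means, precisions):
    """Toy 'sum-exp of quadratics' potential: log sum_k w_k exp(-0.5 (x-m_k)^T P_k (x-m_k))."""
    x = np.atleast_2d(x)
    terms = []
    for w, m, P in zip(weights, means, precisions):
        d = x - m
        terms.append(np.log(w) - 0.5 * np.einsum('ni,ij,nj->n', d, P, d))
    # Stable log-sum-exp over mixture components
    return np.logaddexp.reduce(np.stack(terms, axis=0), axis=0)

# Two-component example in 2-D with hypothetical parameters
weights = [0.3, 0.7]
means = [np.zeros(2), np.ones(2)]
precisions = [np.eye(2), 2.0 * np.eye(2)]
print(sumexp_quadratic_potential(np.array([[0.5, 0.5]]), weights, means, precisions))
```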
Causality and Renormalization in Finite-Time-Path Out-of-Equilibrium φ^3 QFT
Our aim is to contribute to quantum field theory (QFT) formalisms useful for descriptions of short-time phenomena, dominant especially in heavy ion collisions. We formulate out-of-equilibrium QFT within the finite-time-path formalism (FTP) and renormalization theory (RT). The potential conflict of FTP and RT is investigated in gφ^3 QFT, by using the retarded/advanced (R/A) basis of Green functions and dimensional renormalization (DR). For example, vertices immediately after (in time) divergent self-energy loops do not conserve energy, as integrals diverge. We "repair" them, while keeping d < 4, to obtain energy conservation at those vertices. Already in the S-matrix theory, the renormalized, finite part of the Feynman self-energy Σ_F(p_0) does not vanish when |p_0| → ∞ and cannot be split into retarded and advanced parts. In the Glaser-Epstein approach, the causality is repaired in the composite object G_F(p_0)Σ_F(p_0). In the FTP approach, after repairing the vertices, the corresponding composite objects are G_R(p_0)Σ_R(p_0) and Σ_A(p_0)G_A(p_0). In the limit d → 4, one obtains causal QFT. The tadpole contribution splits into diverging and finite parts. The diverging, constant component is eliminated by the renormalization condition ⟨0|φ|0⟩ = 0 of the S-matrix theory. The finite, oscillating energy-nonconserving tadpole contributions vanish in the limit t → ∞.
Improving thermal state preparation of Sachdev-Ye-Kitaev model with reinforcement learning on quantum hardware
The Sachdev-Ye-Kitaev (SYK) model, known for its strong quantum correlations and chaotic behavior, serves as a key platform for quantum gravity studies. However, variationally preparing thermal states on near-term quantum processors for large systems (N > 12, where N is the number of Majorana fermions) presents a significant challenge due to the rapid growth in the complexity of parameterized quantum circuits. This paper addresses this challenge by integrating reinforcement learning (RL) with convolutional neural networks, employing an iterative approach to optimize the quantum circuit and its parameters. The refinement process is guided by a composite reward signal derived from entropy and the expectation values of the SYK Hamiltonian. This approach reduces the number of CNOT gates by two orders of magnitude for systems with N ≥ 12 compared to traditional methods like first-order Trotterization. We demonstrate the effectiveness of the RL framework in both noiseless and noisy quantum hardware environments, maintaining high accuracy in thermal state preparation. This work advances a scalable, RL-based framework with applications for quantum gravity studies and out-of-time-ordered thermal correlators computation in quantum many-body systems on near-term quantum hardware. The code is available at https://github.com/Aqasch/solving_SYK_model_with_RL.
SoDaCam: Software-defined Cameras via Single-Photon Imaging
Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging.
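For a concrete sense of what a photon-cube projection is, the sketch below emulates two of the named capabilities on synthetic binary frames: exposure bracketing by summing frames over windows of different lengths, and a coded flutter-shutter sum; the synthetic flux model and window lengths are arbitrary choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic photon-cube: T binary frames of detections (stand-in for SPAD data)
T, H, W = 1000, 64, 64
flux = 0.02 * (1 + np.linspace(0, 1, W))[None, None, :]  # per-frame detection probability
cube = (rng.random((T, H, W)) < flux).astype(np.uint8)

def exposure_bracket(cube, windows=(50, 200, 1000)):
    """Sum binary frames over windows of different lengths to emulate several exposures."""
    return {w: cube[:w].sum(axis=0) for w in windows}

def flutter_shutter(cube, code):
    """Weight frames with a binary code to emulate a coded-exposure camera."""
    code = np.asarray(code, dtype=cube.dtype)[:, None, None]
    return (cube[: len(code)] * code).sum(axis=0)

brackets = exposure_bracket(cube)
coded = flutter_shutter(cube, rng.integers(0, 2, 512))
print({w: float(img.mean()) for w, img in brackets.items()}, float(coded.mean()))
```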
A Quantum Algorithm for Solving Linear Differential Equations: Theory and Experiment
We present and experimentally realize a quantum algorithm for efficiently solving the following problem: given an N × N matrix M, an N-dimensional vector b, and an initial vector x(0), obtain a target vector x(t) as a function of time t according to the constraint dx(t)/dt = Mx(t) + b. We show that our algorithm exhibits an exponential speedup over its classical counterpart in certain circumstances. In addition, we demonstrate our quantum algorithm for a 4 × 4 linear differential equation using a 4-qubit nuclear magnetic resonance quantum information processor. Our algorithm provides a key technique for solving many important problems which rely on the solutions to linear differential equations.
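For reference, the classical closed-form solution that such an algorithm targets can be written and checked directly, assuming M is invertible: x(t) = e^{Mt} x(0) + (e^{Mt} - I) M^{-1} b. The small sketch below evaluates it with a matrix exponential; the example matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import expm, solve

def linear_ode_solution(M, b, x0, t):
    """Classical reference for dx/dt = M x + b:
    x(t) = e^{Mt} x0 + (e^{Mt} - I) M^{-1} b  (assumes M is invertible)."""
    eMt = expm(M * t)
    return eMt @ x0 + solve(M, (eMt - np.eye(len(x0))) @ b)

# Arbitrary 2x2 example: a damped oscillator with a constant forcing term
M = np.array([[0.0, 1.0], [-1.0, -0.1]])
b = np.array([0.0, 0.5])
x0 = np.array([1.0, 0.0])
print(linear_ode_solution(M, b, x0, 2.0))
```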
Experimental demonstration of memory-enhanced quantum communication
The ability to communicate quantum information over long distances is of central importance in quantum science and engineering. For example, it enables secure quantum key distribution (QKD) relying on fundamental principles that prohibit the "cloning" of unknown quantum states. While QKD is being successfully deployed, its range is currently limited by photon losses and cannot be extended using straightforward measure-and-repeat strategies without compromising its unconditional security. Alternatively, quantum repeaters, which utilize intermediate quantum memory nodes and error correction techniques, can extend the range of quantum channels. However, their implementation remains an outstanding challenge, requiring a combination of efficient and high-fidelity quantum memories, gate operations, and measurements. Here we report the experimental realization of memory-enhanced quantum communication. We use a single solid-state spin memory integrated in a nanophotonic diamond resonator to implement asynchronous Bell-state measurements. This enables a four-fold increase in the secret key rate of measurement device independent (MDI)-QKD over the loss-equivalent direct-transmission method while operating at megahertz clock rates. Our results represent a significant step towards practical quantum repeaters and large-scale quantum networks.
Out of equilibrium Phase Diagram of the Quantum Random Energy Model
In this paper we study the out-of-equilibrium phase diagram of the quantum version of Derrida's Random Energy Model, which is the simplest model of mean-field spin glasses. We interpret its corresponding quantum dynamics in Fock space as a one-particle problem in very high dimension to which we apply different theoretical methods tailored for high-dimensional lattices: the Forward-Scattering Approximation, a mapping to the Rosenzweig-Porter model, and the cavity method. Our results indicate the existence of two transition lines and three distinct dynamical phases: a completely many-body localized phase at low energy, a fully ergodic phase at high energy, and a multifractal "bad metal" phase at intermediate energy. In the latter, eigenfunctions occupy a diverging volume, yet an exponentially vanishing fraction of the total Hilbert space. We discuss the limitations of our approximations and the relationship with previous studies.
On the Dynamics of Acceleration in First order Gradient Methods
Ever since the original algorithm by Nesterov (1983), the true nature of the acceleration phenomenon has remained elusive, with various interpretations of why the method is actually faster. The diagnosis of the algorithm through the lens of Ordinary Differential Equations (ODEs) and the corresponding dynamical system formulation to explain the underlying dynamics has a rich history. In the literature, the ODEs that explain algorithms are typically derived by considering the limiting case of the algorithm maps themselves, that is, an ODE formulation follows the development of an algorithm. This obfuscates the underlying higher-order principles and thus provides little evidence of the working of the algorithm. Such has been the case with the Nesterov algorithm and the various analogies used to describe the acceleration phenomenon, viz., the momentum associated with a Heavy-Ball rolling down a slope, Hessian damping, etc. The main focus of our work is to trace the genesis of the Nesterov algorithm from the viewpoint of dynamical systems, demystifying the mathematical rigour behind the algorithm. Instead of reverse engineering ODEs from discrete algorithms, this work explores tools from the recently developed control paradigm titled the Passivity and Immersion approach and the Geometric Singular Perturbation theory, which are applied to arrive at the formulation of a dynamical system that explains and models the acceleration phenomenon. This perspective helps to gain insights into the various terms present and the sequence of steps used in Nesterov's accelerated algorithm for the smooth strongly convex and the convex case. The framework can also be extended to derive the acceleration achieved using the triple momentum method and provides justifications for the non-convergence to the optimal solution in the Heavy-Ball method.
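To fix notation for the two discrete schemes discussed above: Nesterov's method evaluates the gradient at a look-ahead point, whereas the heavy-ball method evaluates it at the current iterate. The sketch below shows both updates on an ill-conditioned quadratic; the step size and momentum coefficient are arbitrary illustrative values.

```python
import numpy as np

def nesterov(grad, x0, lr, mu, steps):
    """Nesterov's accelerated gradient: gradient evaluated at the look-ahead point x + mu*v."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        lookahead = x + mu * v
        v = mu * v - lr * grad(lookahead)
        x = x + v
    return x

def heavy_ball(grad, x0, lr, mu, steps):
    """Polyak heavy-ball: momentum added, but gradient evaluated at the current point x."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = mu * v - lr * grad(x)
        x = x + v
    return x

# Ill-conditioned quadratic f(x) = 0.5 x^T A x with minimizer at the origin
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
print("nesterov  :", nesterov(grad, [1.0, 1.0], 0.009, 0.9, 200))
print("heavy-ball:", heavy_ball(grad, [1.0, 1.0], 0.009, 0.9, 200))
```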
Quasinormal modes in two-photon autocorrelation and the geometric-optics approximation
In this work, we study the black hole light echoes in terms of the two-photon autocorrelation and explore their connection with the quasinormal modes. It is shown that the above time-domain phenomenon can be analyzed by utilizing the well-known frequency-domain relations between the quasinormal modes and characteristic parameters of null geodesics. We found that the time-domain correlator, obtained by the inverse Fourier transform, naturally acquires the echo feature, which can be attributed to a collective effect of the asymptotic poles through a weighted summation of the squared modulus of the relevant Green's functions. Specifically, the contour integral leads to a summation taking over both the overtone index and angular momentum. Moreover, the dominant contributions to the light echoes are from those in the eikonal limit, consistent with the existing findings using the geometric-optics arguments. For the Schwarzschild black holes, we demonstrate the results numerically by considering a transient spherical light source. Also, for the Kerr spacetimes, we point out a potential difference between the resulting light echoes using the geometric-optics approach and those obtained by the black hole perturbation theory. Possible astrophysical implications of the present study are addressed.
Generative Latent Space Dynamics of Electron Density
Modeling the time-dependent evolution of electron density is essential for understanding quantum mechanical behaviors of condensed matter and enabling predictive simulations in spectroscopy, photochemistry, and ultrafast science. Yet, while machine learning methods have advanced static density prediction, modeling its spatiotemporal dynamics remains largely unexplored. In this work, we introduce a generative framework that combines a 3D convolutional autoencoder with a latent diffusion model (LDM) to learn electron density trajectories from ab-initio molecular dynamics (AIMD) simulations. Our method encodes electron densities into a compact latent space and predicts their future states by sampling from the learned conditional distribution, enabling stable long-horizon rollouts without drift or collapse. To preserve statistical fidelity, we incorporate a scaled Jensen-Shannon divergence regularization that aligns generated and reference density distributions. On AIMD trajectories of liquid lithium at 800 K, our model accurately captures both the spatial correlations and the log-normal-like statistical structure of the density. The proposed framework has the potential to accelerate the simulation of quantum dynamics and overcome key challenges faced by current spatiotemporal machine learning methods as surrogates of quantum mechanical simulators.
