Bringing the Quantum to the Classical: A Hybrid Simulation of Supernova Neutrinos

By Daniel Heimsoth, Physics PhD student

Simulating quantum systems on classical computers is currently a near-impossible task, as memory and computation time requirements scale exponentially with the size of the system. Quantum computers promise to solve this scalability issue, but there is just one problem: they can’t yet do so reliably, because of overwhelming amounts of noise.

So when UW–Madison physics postdoc Pooja Siwach, former undergrad Katie Harrison BS ‘23, and professor Baha Balantekin wanted to simulate neutrino evolution inside a supernova, they needed to get creative.  

profile photo of Pooja Siwach
Pooja Siwach

Their focus was on a phenomenon called collective neutrino oscillations, which describes a peculiar type of interaction between neutrinos. Neutrinos are unique among elementary particles in that they change type, or flavor, as they propagate through space. These oscillations between flavors are dictated by the density of neutrinos and other matter in the medium, both of which change from the core to the outer layers of a supernova. Physicists are interested in how the flavor composition of neutrinos evolves in time; this is calculated using a time evolution simulation, one of the most popular calculations currently done on quantum computers.

Ideally, researchers could calculate each interaction between every possible pair of neutrinos in the system. However, supernovae produce around 10^58 neutrinos, a literally astronomical number. “It’s really complex, it’s very hard to solve on classical computers,” Siwach says. “That’s why we are interested in quantum computing because quantum computers are a natural way to map such problems.” 

profile photo of Katie Harrison
Katie Harrison

This naturalness is due to the “two-level” similarity between quantum computers and neutrino flavors: qubits are two-level quantum systems, and neutrino flavor states are well approximated as two levels in most physical settings, including supernovae.
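The two-level mapping can be made concrete with a short numerical sketch. The snippet below evolves a single neutrino flavor state under an illustrative two-flavor vacuum Hamiltonian (the mixing angle and oscillation frequency are made-up values, not parameters from the paper) and reproduces the textbook survival probability P(t) = 1 − sin²(2θ) sin²(ωt/2):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-flavor vacuum oscillation; the mixing angle and
# oscillation frequency are made-up values, not parameters from the paper.
theta = 0.6    # mixing angle (hypothetical)
omega = 1.0    # oscillation frequency, Delta m^2 / (2E), in natural units

# Flavor-basis Hamiltonian: H = (omega/2) * (-cos(2θ) σ_z + sin(2θ) σ_x)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * omega * (-np.cos(2 * theta) * sz + np.sin(2 * theta) * sx)

def survival_probability(t):
    """Probability that a neutrino born as flavor |0> is still |0> at time t."""
    psi = expm(-1j * H * t) @ np.array([1.0, 0.0])
    return abs(psi[0]) ** 2

# Matches the textbook formula P(t) = 1 - sin^2(2θ) sin^2(ωt/2)
p = survival_probability(2.3)
```

For a single neutrino in vacuum this is exactly solvable; the point of the paper is that neutrino-neutrino interaction terms make the many-body version of this time evolution classically intractable.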

In a paper published in Physical Review D in October, Siwach, Harrison, and Balantekin studied the collective oscillation problem using a quantum-assisted simulator, or QAS, which combines the benefits of the natural mapping of the system onto qubits and classical computers’ strength in solving matrix equations. 

In QAS, the interactions between particles are broken down into a linear combination of products of Pauli matrices, which are the building blocks for quantum computing operations, while the state itself is split into a sum of simpler states. The quantum portion of the problem then boils down to computing products of basis states with each Pauli term in the interaction. These products are then fed into the oscillation equations.
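As a rough illustration of the decomposition step (a toy example, not the Hamiltonian used in the paper), the snippet below expands a simple two-qubit exchange interaction, of the kind used to model neutrino-neutrino forward scattering, into Pauli products:

```python
import numpy as np

# Single-qubit Pauli basis
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_decompose(H):
    """Expand a two-qubit Hermitian matrix as H = sum_ab c_ab (P_a ⊗ P_b),
    using c_ab = Tr[(P_a ⊗ P_b) H] / 4."""
    coeffs = {}
    for a, Pa in paulis.items():
        for b, Pb in paulis.items():
            c = np.trace(np.kron(Pa, Pb) @ H) / 4
            if abs(c) > 1e-12:
                coeffs[a + b] = c.real if abs(c.imag) < 1e-12 else c
    return coeffs

# Toy exchange interaction (not the paper's actual Hamiltonian):
# coefficients come out as 'XX', 'YY', 'ZZ', each with weight 1
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
coeffs = pauli_decompose(H)
```

Once the interaction is written in this form, the quantum computer only needs to evaluate matrix elements of individual Pauli terms between basis states.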

a graph with 4 neutrino traces in 4 colors
Flavor composition (y-axis) of four supernova neutrinos over time due to collective oscillations, calculated using the quantum-assisted simulator. The change in flavor for each neutrino over time shows the effect of neutrino-neutrino interactions.

“Then we get the linear-algebraic equations to solve, and solving such equations on a quantum computer requires a lot of resources,” explains Siwach. “That part we do on classical computers.”  

This approach allows researchers to use the quantum computer only once, before the actual time evolution simulation is done on a classical computer, avoiding common pitfalls of quantum calculations such as the accumulation of errors from noisy gates over the length of the simulation. The authors showed that the QAS results for a four-neutrino system match a purely classical calculation, showcasing the power of the approach. A purely quantum simulation, by contrast, quickly deviates from the exact solution because of accumulated errors from gates that control two qubits at the same time.

Still, as with any current application of quantum computers, there are limitations. “There’s only so much information that we can compute in a reasonable amount of time [on quantum computers],” says Siwach. She also points to the scaling challenges of both the QAS and full quantum simulations. “One more hurdle is scaling to a larger number of neutrinos. If we scale to five or six neutrinos, it will require more qubits and more time, because we have to reduce the time step as well.”

Harrison, who was an undergraduate physics student at UW–Madison during this project, was supported by a fellowship from the Open Quantum Initiative, a new program to expand undergrad research experiences in quantum computing and quantum information science. She enjoyed her time in the program and thinks that it benefits students looking to get involved in research in the field: “I think it’s really good for students to see what it really means to do research and to see if it’s something that you’re capable of doing or something that you’re interested in.” 

trace of neutrino flavor composition over time comparing a quantum simulation to a full classical one
Flavor composition of a neutrino over time using a full quantum simulation (red points) compared to exact solution (black line). The points start to drift from the exact solution after only a few oscillations, highlighting how noise in the quantum computer negatively affects the calculation.

 

Victor Brar earns NSF CAREER award

Congrats to associate professor Victor Brar on earning an NSF CAREER award! CAREER awards are NSF’s most prestigious awards in support of early-career faculty who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.

Victor Brar

For this award, Brar will study the flow of electrons in 2D materials, or materials that are only around one atom thick. His group has already shown that when they applied a relatively old technique — scanning tunneling potentiometry, or STP — to 2D materials such as graphene, they could create unexpectedly high-contrast images, where they could track the movement of individual electrons when an electric current was applied. They found that electrons flow like a viscous fluid, a property that had been predicted but not observed directly.

“So now instead of applying electrical bias, we’ll apply a thermal bias, because we know things move from hot to cold, and then image how [electrons] move in that way,” Brar says. “Part of what’s driving this idea is that Professor Levchenko has predicted that if you image the way heat flows through a material, it should also behave hydrodynamically, like a liquid, rather than diffusive, which is how you might imagine it.”

One motivation for this research is to better understand the general flow of fluids, a problem that is often too complex for supercomputers to solve correctly. Because STP visualizes the fluid-like flow of electrons directly, Brar envisions this work as potentially providing a way of solving fluid mechanics problems by directly imaging flow, without the need for simulations, similar to what is done in wind tunnels.

“Also, there are these predicted phases of electrons that no one has observed before,” Brar says. “We want to be the first to observe them.”

In addition to an innovative research component, NSF proposals require that the research has broader societal impacts, such as working toward greater inclusion in STEM or increasing public understanding of science. Brar’s group is using haptic pens, devices that are commonly used in remote trainings for surgeons and in the gaming community because they give a gentle push back that mimics a realistic touch. By attaching the haptic pen to a scanning tunneling microscope (STM), people holding the pen can “feel” the individual atoms and surfaces that the STM is touching.

“We think materials science is one of those areas where feeling the forces that hold matter together may prove more intuitive than looking at equations,” Brar says. “We’re making virtual crystal lattices that you can touch with the haptic pen and feel how the atoms fit together, but we’re also making it so you can feel the different forces of the different atoms used.”

Brar plans to introduce the haptic pen and atom models into Physics 407 and develop a materials science module for the UW Alumni Association’s Grandparents University. And because the haptic pen relies almost entirely on touch, Brar plans to work with the Wisconsin Council of the Blind and Visually Impaired to improve access to materials science instruction for people with vision impairments.

 

 

“Sandwich” structure found to reduce errors caused by quasiparticles in superconducting qubits

Qubits are notoriously more prone to error than their classical counterparts. While superconducting quantum computers currently use on the order of 100 to 1000 qubits, an estimated one million qubits will be needed to track and correct errors in a quantum computer designed for real-world applications. At present, it is not known how to scale superconducting qubit circuits to this size.

In a new study published in PRX Quantum, UW–Madison physicists from Robert McDermott’s group developed and tested a new superconducting qubit architecture that is potentially more scalable than the current state of the art. Control of the qubits is achieved via “Single Flux Quantum” (SFQ) pulses that can be generated close to the qubit chip. They found that SFQ-based control fidelity improved ten-fold over their previous versions, providing a promising platform for scaling up the number of qubits in a quantum array.

profile photo of Robert McDermott
Robert McDermott
profile photo of Vincent Liu
Vincent Liu

The architecture involves a sandwich of two chips: one chip houses the qubits, while the other contains the SFQ control unit. The new approach suppresses the generation of quasiparticles, which are disruptions in the superconducting ground state that degrade qubit performance.

“This structure physically separates the two units, and quasiparticles on the SFQ chip cannot diffuse to the quantum chip and generate errors,” explains Chuan-Hong Liu, PhD ’23, a former UW–Madison physics graduate student and lead author of the study. “This design is totally new, and it greatly improves our gate fidelities.”

Liu and his colleagues assessed the fidelity of SFQ-based gates through randomized benchmarking. In this approach, the team established operating parameters to maximize the overall fidelity of complex control sequences. For instance, for a qubit that begins in the ground state, they performed long sequences incorporating many gates that should be equivalent to an identity operation; in the end, they measured the fraction of the population remaining in the ground state. A higher measured ground state population indicated higher gate fidelity.
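The decay of the ground-state population with sequence length is the signature that randomized benchmarking looks for. The toy model below (a depolarizing-noise caricature, not the noise model of the actual experiment) shows how survival decays toward one half as identity-equivalent sequences get longer, with the decay rate encoding the average gate fidelity:

```python
import numpy as np

rng = np.random.default_rng(0)

def rb_survival(num_gates, error_per_gate, shots=20000):
    """Simulated ground-state survival after an identity-equivalent gate
    sequence, with each gate modeled as a depolarizing error. A toy
    caricature, not the noise model of the actual experiment."""
    p_clean = (1 - error_per_gate) ** num_gates   # chance no gate erred
    p0 = p_clean + (1 - p_clean) * 0.5            # depolarized shots: 50/50
    return rng.binomial(shots, p0) / shots

# Survival decays toward 1/2 with sequence length; fitting the decay
# across many lengths yields the average error per gate.
p10 = rb_survival(10, 0.01)
p200 = rb_survival(200, 0.01)
```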

Inevitably, there are residual errors, but the reduced quasiparticle poisoning was expected to lower the error rate and improve gate fidelities — and it did.

four panels showing the new chip architecture. The two on the left just show the two computer chips, and then the top right panel shows them "sandwiched" on top of each other. The bottom right panel is a circuit diagram of the whole setup.
The quantum-classical multichip module (MCM). (a) A micrograph of the qubit chip. (b) A micrograph of the SFQ driver chip. (c) A photograph showing the assembled MCM stack; the qubit chip is outlined in red and the SFQ chip is outlined in blue. (d) The circuit diagram for one qubit-SFQ pair. | From Liu et al, PRX Quantum.

“Most of the gates had 99% fidelity,” Liu says. “That’s a one order of magnitude reduction in infidelity compared to the last generation.”

Importantly, they showed the stability of the SFQ-based gates over the course of a six-hour experimental run.

Later in the study, the researchers investigated the source of the remaining errors. They found that the SFQ unit was emitting photons with sufficient energy to create quasiparticles on the qubit chip. With the unique source of the error identified, Liu and his colleagues can develop ways to improve the design.

“We realized this quasiparticle generation is due to spurious antenna coupling between the SFQ units and the qubit units,” Liu says. “This is really interesting because we usually talk about qubits in the range of one to ten gigahertz, but this error is in the 100 to 1000 gigahertz range. This is an area people have never explored, and we provide a straightforward way to make improvements.”

This study is a collaboration among the National Institute of Standards and Technology, Syracuse University, Lawrence Livermore National Laboratory, and UW–Madison.

This work was funded in part by the National Science Foundation (DMR-1747426); the Wisconsin Alumni Research Foundation (WARF) Accelerator; Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity (IARPA-20001-D2022-2203120004); and the NIST Program on Scalable Superconducting Computing and the National Nuclear Security Administration Advanced Simulation and Computing Beyond Moore’s Law program (LLNL-ABS-795437).

Choy leads team awarded National Science Foundation Quantum Sensing Challenge Grant

The National Science Foundation has selected a proposal “Compact and robust quantum atomic sensors for timekeeping and inertial sensing” by an interdisciplinary team led by University of Wisconsin-Madison researchers for...

Read the full article at: https://engineering.wisc.edu/blog/choy-leads-team-awarded-national-science-foundation-quantum-sensing-challenge-grant/

NASA funds Fundamental Physics proposal from Shimon Kolkowitz

This post is adapted from a NASA news release; read the original here

NASA’s Fundamental Physics Program has selected seven proposals, including one from UW–Madison physics professor Shimon Kolkowitz, submitted in response to the Research Opportunities in Space and Earth Sciences – 2022 Fundamental Physics call for proposals.

The selected proposals are from seven institutions in seven states, with a total combined award amount of approximately $9.6 million over a five-year period. Kolkowitz’s proposal is “Developing new techniques for ultra-high-precision space-based optical lattice clock comparisons.”

Three of the selected projects will involve performing experiments using the Cold Atom Laboratory (CAL) aboard the International Space Station (ISS). Four of the selected proposals call for ground-based research to help NASA identify and develop the foundation for future space-based experiments.

The Fundamental Physics Program is managed by the Biological and Physical Sciences Division in NASA’s Science Mission Directorate. This program performs carefully designed research in space that advances our understanding of physical laws, nature’s organizing principles, and how these laws and principles can be manipulated by scientists and technologies to benefit humanity on Earth and in space.

Beating the diffraction limit in diamonds

by Daniel Heimsoth

Resolving very small objects that are close together is a frequent goal of scientists, making the microscope a crucial tool for research in many different fields from biology to materials science.

The resolution of even the best modern confocal microscopes — a type of optical microscope common in biology, medicine, and crystallography — is limited by an optical bound on how narrowly a laser beam can be focused, known as the diffraction limit.

In a study recently published in the journal ACS Photonics, UW–Madison physics professor Shimon Kolkowitz and his group developed a method to image atomic-level defects in diamonds with super-resolution, reaching a spatial resolution fourteen times better than the diffraction limit achievable with their optics. And, because the technique uses a standard confocal microscope, this super-resolution should be available to any researcher who already has access to this common equipment.

profile photo of Aedan Gardill
Aedan Gardill

While methods to achieve super-resolution already exist, such as stimulated emission depletion microscopy (STED), nearly all of these methods either require the addition of special optics, which can be expensive and difficult to install, or specialized samples and extensive post processing of the data. The UW–Madison technique, which they call “super-resolution Airy disk microscopy” (SAM), avoids such barriers to entry.

“You can get this all for free with the existing setup that a lot of labs already have, and it performs almost just as well,” says Aedan Gardill, a graduate student in Kolkowitz’s group and lead author of the paper. “We were able to get resolution down to twenty nanometers, which is comparable with standard techniques using [STED].”

The ‘Airy disk’ in SAM refers to a key feature of light beams that gives rise to the diffraction limit but which the researchers turned to their advantage.

Confocal microscopes use laser beams of specific wavelengths to excite matter in a sample, causing that matter to emit light. On the microscopic scale, the laser beam does not create a solid circle of light on the sample in the same way a flashlight would.  Rather, light hits the object in a series of light and dark rings called an Airy pattern. Within the dark rings, the matter receives no light, which means it cannot be detected by the microscope’s light sensors.
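The Airy pattern has a simple closed form, and the familiar diffraction limit falls out of its first dark ring. A short sketch, using standard optics formulas rather than anything specific to the paper:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

def airy_intensity(x):
    """Normalized Airy pattern, I(x) = [2 J1(x) / x]^2, where
    x = pi * D * sin(theta) / wavelength for aperture diameter D."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)            # limiting value at x = 0
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

# The first dark ring sits at the first zero of the Bessel function J1,
# near x = 3.83, which rearranges to sin(theta) ≈ 1.22 * wavelength / D:
# the classic Rayleigh diffraction limit.
x_dark = brentq(j1, 1.0, 5.0)
rayleigh_factor = x_dark / np.pi     # approximately 1.22
```

The SAM trick exploits the fact that an emitter sitting exactly in one of these zero-intensity rings receives no light at all.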

The novelty of the SAM technique is in its two laser beam pulses, one spatially offset from the other such that the overlapping Airy patterns can distinguish between two closely spaced objects.

In their paper, the research team studied nitrogen-vacancy (NV) centers in diamond crystal, which are sites in the crystal lattice where one of two neighboring carbon atoms is replaced by a nitrogen atom and the other is left empty. NV centers are known to have two different charge states based on how many electrons are in the defect, one that fluoresces and one that remains dark when yellow light is applied to them.

To resolve two NV centers separated by a distance less than the diffraction limit of the microscope, the SAM procedure first shines green light on them, preparing both centers into their fluorescent charge state. Then, a red laser is applied, offset such that only one of the two NV centers is in the dark ring of the Airy pattern and thus is not affected by the beam. The NV center that does see the red light is switched to the dark state.

a cartoon-rendered image of a microscope objective, with a red cylinder (light) hitting a sample that shows concentric rings of red and blue, as described in the text
Super-resolution Airy disk Microscopy uses the Airy disk (red pattern) generated by diffraction from an objective lens aperture (gray cylinder) to localize and control an emitter (here a nitrogen vacancy center in diamond) below the diffraction limit. Emitter fluorescence is suppressed everywhere except in a very narrow ring (blue donut).

“It goes to another dark charge state where it does not interact with yellow light,” Gardill explains. “But the initial bright charge state does interact with yellow light and will emit light.”

Finally, when the yellow laser is applied, one NV center emits light while the other does not, effectively differentiating between the two neighboring sites. By repeating these steps iteratively over a grid, the researchers could reconstruct a full image of the two nearby NVs with spectacular resolution.
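The green-red-yellow logic can be caricatured in a few lines. In the sketch below, the NV positions, ring radius, and ring width are hypothetical illustration values, not numbers from the paper:

```python
import numpy as np

# Toy one-dimensional sketch of the SAM pulse sequence. Positions, ring
# radius, and ring width are hypothetical illustration values.
nv_positions = np.array([0.0, 30.0])   # nm; separation below the diffraction limit

DARK_RING_RADIUS = 30.0                # nm; radius of the red beam's dark ring
RING_WIDTH = 5.0                       # nm; zone where an NV escapes the red light

def sam_readout(red_beam_center):
    """Return a mask of which NV centers still fluoresce under yellow light
    after the green (initialize) and offset red (switch-off) pulses."""
    # 1. Green pulse: both NVs start in the bright charge state.
    bright = np.ones(len(nv_positions), dtype=bool)
    # 2. Offset red pulse: any NV outside the dark ring is switched dark.
    r = np.abs(nv_positions - red_beam_center)
    in_dark_ring = np.abs(r - DARK_RING_RADIUS) < RING_WIDTH / 2
    bright &= in_dark_ring
    # 3. Yellow readout: only still-bright NVs emit light.
    return bright

# Placing the red beam so that only the first NV sits in the dark ring
# isolates its signal from its neighbor's.
mask = sam_readout(-30.0)
```

Scanning the red-beam offset across a grid and repeating the sequence at each point is what lets the full super-resolved image be reconstructed.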

The idea for this technique came as a bit of a surprise while the team was studying charge properties of NV centers in 2020.

“We tried the combinations of red-green, green-red, red-red, green-green with those first two [laser] pulses, and the one that was green then red, we ended up seeing this ring,” Gardill recounts. “And Shimon was like, ‘The width of the ring is smaller than the size of [the confocal image of] the NV. That is super-resolution.’”

This method could find wide use in many different fields, including biology and chemistry where NV centers are used as nanoscale sensors of magnetic and electric fields and of temperature in compounds and organic material. NV centers have also been studied as candidates for quantum repeaters in quantum networks, and the research team has considered the feasibility of using the SAM technique to aid in this application. Currently, the SAM method has only been applied to NV centers in diamond crystal, and more research is needed to extend its use to different systems.

The fact that all of this can be done with hardware that many labs across the world already have access to cannot be overstated. Gardill reiterates, “If they have a basic confocal microscope and don’t want to buy another super-resolution microscope, they can utilize this technique.”

This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC0020313.

Daniel Heimsoth is a second-year PhD student in Physics. This was his first news story for the department.

Chicago State University students gain quantum experience through HQAN summer internships

profile photos of Anosh Wasker, Dominique Newell, Gabrielle Jones-Hall, and Ryan Stempek

This story was adapted from one originally published by HQAN

Over the past summer, the NSF Quantum Leap Challenge Institute for Hybrid Quantum Architectures and Networks (HQAN) offered a 12-week “Research Experiences for CSU Students” internship opportunity that provided students and recent graduates from Chicago State University (CSU) with virtual research experiences addressing quantum science topics. In an August 20 online poster session, students presented the results of their summer projects to HQAN’s university and industry partners.

Mallory Conlon, HQAN’s outreach program coordinator and the quantum science outreach program coordinator with the UW–Madison department of physics, explained that this year’s program was the pilot offering. “We wanted to make sure we had the support and activity structures right before expanding this to more [minority serving institutions] (MSIs) and other underrepresented groups across the Midwest. We’re currently evaluating the program and aim to develop an expanded internship for summer 2022.” For the pilot, CSU was chosen as the sole participating MSI because of its proximity to the University of Chicago (one of HQAN’s three university partners), and because of HQAN staff connections to CSU.

The posters presented on August 20 included Anosh Wasker’s “Quantum Games for Pedagogy” (advised by Russell Ceballos of the Chicago Quantum Exchange); Dominique Newell’s “Super-Resolution Microscopy Using Nitrogen Vacancy Centers in Diamond to Analyze the Optical Near Field Diffraction Limit” (advised by Shimon Kolkowitz of the University of Wisconsin–Madison); Gabrielle Jones-Hall’s “Demonstrating Entanglement” (advised by Paul Kwiat of the University of Illinois at Urbana-Champaign (UIUC)); and Ryan Stempek’s “Quantum vs. Classical Boltzmann Machines for Learning a Quantum Circuit” (advised by Bryan Clark of UIUC).

Wasker is pursuing a Master’s at CSU; his long-term goals are to go for a PhD and then work in industry. Over the summer, he developed an air-hockey-inspired computer game that teaches players some of the counterintuitive concepts involved in quantum—particularly the Hong-Ou-Mandel (HOM) effect. He says he’s passionate about quantum science and has noticed that many opportunities are coming up in the field, but that it’s difficult for people to find “access points” into learning about this intimidating topic so that they can seize those opportunities. His summer project was inspired by his belief that learning through play is a powerful way to gain understanding.

Newell recently graduated from CSU with a BS in physics, with a minor in chemistry. She spent the summer studying the propagation of a laser beam through a nitrogen vacancy center in diamond, as observed through a confocal microscope. The goal was to locate the zero intensity points above and below the focal plane of a Gaussian beam by using its own electromagnetic field.

Jones-Hall is now in graduate school at Mississippi Valley State University. She’s working towards a Master’s in Bioinformatics but plans to return to quantum after completing that degree, so her internship project—which worked on developing a quantum-themed escape room designed to teach players the concept of quantum entanglement—will be relevant to her later work.

Stempek will graduate in December with a Master’s in computer science and then work in industry. His summer project aimed to show that a quantum Restricted Boltzmann Machine (Q-RBM) has the potential to learn the probability distribution over a set of inputs more accurately than a classical RBM (C-RBM) can for the same circuit. He says the internship was a great opportunity for him to further build his Python skills and problem-solve through the ups and downs of research. “[It] was really beneficial,” he says, “and actually, moving into industry, I feel that I’ll have a greater sense of self-confidence… It was a great experience!”

HQAN is a partnership among the University of Chicago, UIUC, and the University of Wisconsin–Madison and is funded by the National Science Foundation.