
Detecting Planck-scale dark matter by leveraging quantum interference

A fast moving Planck-mass dark matter particle (blue) flying next to a quantum particle in a state which is the superposition of two positions (red) induces a phase shift between the two quantum branches, which can be detected by quantum interference. This mechanism, implemented by multiple Josephson-junctions, might underpin a purely-gravitational dark matter detector. Credit: Marios Christodoulou, Alejandro Perez and Carlo Rovelli.

While various studies have hinted at the existence of dark matter, its nature, composition and underlying physics remain poorly understood.

In recent years, physicists have been theorizing about and searching for various possible dark matter candidates, including particles with masses at the Planck scale (around 1.22×10¹⁹ GeV, or 2.18×10⁻⁸ kg) that could be tied to quantum gravity effects.
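For context, the Planck mass is the mass scale built solely from fundamental constants:

$$ m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.18 \times 10^{-8}\ \mathrm{kg} \approx 1.22 \times 10^{19}\ \mathrm{GeV}/c^2, $$

enormous by particle-physics standards yet tiny on everyday scales.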

Researchers at Aix-Marseille University and the Institute for Quantum Optics and Quantum Information recently hypothesized that Planck-scale dark matter could be detected using highly sensitive gravity-mediated quantum phase shifts. Their paper, published in Physical Review Letters, introduces a protocol designed to enable the detection of these hypothetical dark matter particles using Josephson junctions.

“This study originated from an idea raised by Alejandro Perez,” Carlo Rovelli, co-author of the paper, told Phys.org.

“The three of us were all teaching at a Quantum Gravity and Quantum Information school organized in the French countryside by the QISS research consortium and, since we know each other but we usually live in different cities, we decided to share an apartment while there.

“Alejandro had the idea that a special kind of quantum interference generated by a gravitational force, which is discussed as a possible way of revealing a quantum gravity effect in a laboratory, could also be used to detect Planck-scale dark matter.”

Christodoulou and his colleagues at the Institute for Quantum Optics and Quantum Information had been exploring the possibility of detecting dark matter particles with masses in the Planck-scale for a few years. Originally, they focused on the possibility of detecting these particles using a quantum sensor, an idea that Christodoulou also discussed with Rovelli at a workshop in Greece in 2022.

“I had a student attempting a calculation of the classical motion of the particle due to its gravitational attraction, which was seen at the time as a preliminary step toward quantum sensing using technologies developed in Vienna. Yet this was the wrong idea,” said Marios Christodoulou, co-author of the study.

“While I was giving a course in France on the theory behind the gravity-mediated entanglement experiments, a main point I was driving at was precisely that, while the effect of gravity is typically thought of as ‘things falling into each other,’ the reason interferometry can amplify the minuscule effect of gravity has nothing to do with that; it has only to do with the value of the action, which can take different values in a quantum setting even neglecting the ‘things falling into each other.’”

When he was at the University of Toulon in France, Christodoulou started discussing the ideas he was exploring in his research with Alejandro Perez, a Senior Professor at the University. This initiated the collaboration that ultimately led to this study.

“I then told him that I had a student trying to calculate the ‘things falling into each other’ effect for a classical sensor, which would allow us subsequently to think of a quantum sensor. Alejandro mentioned that I had just argued that this is the wrong thing to do, which it was, and I had not realized it,” said Christodoulou. “That is when the idea clicked, and then Alejandro spent a few days on his laptop doing the calculation that is the backbone of the paper.”

The study by this group of researchers builds on previous work by Rovelli describing Planckian black holes (black holes with Planck-scale masses) from the theoretical standpoint of loop quantum gravity. That work suggested that these particles interact only gravitationally, which makes them promising dark matter candidates.

“I became obsessed with this idea in 2021, when I realized that a sufficiently hot big bang would produce exactly the right amount of such black holes needed to explain the observed dark matter abundance today,” said Perez.

“The big bang needs to be at an initial temperature close to the Planck temperature, which is also a natural possibility from the perspective of quantum gravity. I call this ‘the gravitational miracle’ by analogy with the so-called WIMP miracle that motivated the search for WIMPs when people believed strongly in supersymmetry. Since then, I have been trying hard to find some observational handle on this idea or, in other words: if dark matter is made of such tiny black holes, how could we prove it?”

Rovelli, Christodoulou and Perez subsequently started exploring this idea more in-depth and trying to identify potential ways to test it. They first focused on potential methods of testing quantum mechanics in instances where gravity is relevant.

“I attended a lecture by Markus Aspelmeyer at the QISS conference where incredible experiments in this realm, that seemed impossible some time ago, are being performed,” said Perez. “That afternoon the three of us engaged in discussions and the idea of the paper naturally emerged.”

Based on Rovelli’s previous theoretical studies of black holes, the researchers hypothesized that Planck-scale objects do exist. In these past papers, they proposed that at the end of their lives, black holes could become long-lived Planck-scale particles. These particles would be extremely tiny and yet possess considerable mass, roughly twenty micrograms (the 2.18×10⁻⁸ kg quoted above).

“Our main hypothesis was that Planck-mass particles with a cross section of about the Planck scale exist in nature,” said Christodoulou.

“These would have a relatively significant gravitational attraction since Planck mass is about the mass of a human hair. It is small but large enough for its gravitational attraction to be barely detectable. These make very natural dark matter candidates because we know dark matter interacts gravitationally but in no other way significantly, and this is how these particles would be expected to behave.”

Essentially, the researchers proposed that a test particle (i.e., a probe) prepared in a superposition of two different locations would feel the gravitational field of a passing Planck-mass particle differently at each of those locations. This would imprint a relative quantum phase on the two branches that could be detected if they are experimentally made to interfere with one another.
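As a schematic illustration (a minimal Newtonian sketch, not the full calculation in the paper), the relative phase accumulated between the two branches is set by the difference in gravitational potential energy integrated over the encounter:

$$ \Delta\phi = \frac{1}{\hbar}\int_{-\infty}^{\infty}\Big[V\big(r_2(t)\big)-V\big(r_1(t)\big)\Big]\,dt, \qquad V(r) = -\frac{G\,m_{\rm DM}\,m_{\rm probe}}{r}. $$

For a dark matter particle crossing at speed $v$ with impact parameters $b_1$ and $b_2$ to the two branches, $r_i(t)=\sqrt{b_i^2+v^2t^2}$ and the integral evaluates (up to sign conventions) to

$$ \Delta\phi = \frac{2\,G\,m_{\rm DM}\,m_{\rm probe}}{\hbar\,v}\,\ln\frac{b_2}{b_1} \approx \frac{2\,G\,m_{\rm DM}\,m_{\rm probe}\,\Delta x}{\hbar\,v\,b} \quad (\Delta x = b_2-b_1 \ll b), $$

so the effect grows with the mass of the passing particle and shrinks with its speed and distance.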

“To actually measure the effect (as the wave function only tells us the probability of finding the probe particle in a given place), one has to repeat the observation many times and do statistics,” said Perez.

“The problem is that we do not have such a luxury as the dark matter particles are very rare (their density is very small) and so the experiment cannot be repeated many times at will.

“For practical reasons, it is best to assume that the probe particle has spin (like an electron); it is then easier to design an ideal experiment where one measures the interference not in position but in the spin variable. Yet the difficulty of having to repeat the experiment many times remains in this improved scenario.”

In their paper, the team shows that it could be possible to search for Planck-scale particles using a system in which many particles share a coherent collective quantum state, which removes the need to repeat the experiment many times.

“One has about 10²³ electrons/cm³ in a special quantum state where all behave like a single one (they are described by a collective single wave function),” said Perez.

“In a Josephson junction they are (in a way) in a superposition of different locations on either side of the junction (a spatial gap separating two superconductors). The passage of a dark matter particle acts differently (gravitationally) on the two sides because they are at different distances, and the interference between the wave function on the two sides produces a macroscopic effect: a current across the junction (electrons tunneling across the gap).”

Because a single passage of a Planck-scale dark matter particle acts on this enormous number of electrons at once, the statistical averaging happens inside the device itself rather than over many repeated runs.

“The current across the gap is the average (in the statistical sense) of the probabilistic response of each of the 10²³ electrons/cm³,” Perez said. “It is as if a macroscopic number of experiments of the first type would have been done at once.”
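As a rough order-of-magnitude illustration of why this collective response matters (the parameter values below are assumed here for illustration, not taken from the paper), one can evaluate the small-separation phase formula given earlier:

import math

# Physical constants (SI units)
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
m_e  = 9.109e-31   # electron mass, kg

# Illustrative, assumed parameters (not values quoted in the paper)
m_dm    = 2.18e-8  # Planck-mass dark matter particle, kg
v       = 2.2e5    # typical galactic dark matter speed, ~220 km/s
b       = 1e-3     # impact parameter to the device, 1 mm
delta_x = 1e-6     # effective separation between the two "sides", 1 micron

# Gravitationally induced phase difference per electron (small-separation limit)
dphi_per_electron = 2 * G * m_dm * m_e * delta_x / (hbar * v * b)
print(f"phase shift per electron: {dphi_per_electron:.1e} rad")

# A superconducting condensate packs roughly 1e23 electrons per cm^3 into a
# single collective wave function, which is what lets one passage register
# without repeating the experiment.
print(f"condensate electrons per cm^3: {1e23:.0e}")

The single-electron phase is absurdly small, which is why the coherent condensate, rather than many repeated single-particle runs, is the essential ingredient of the proposal.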

This recent paper by Rovelli, Christodoulou and Perez could soon open new possibilities for the search of Planck-scale dark matter particles. In the future, the protocol they proposed could contribute to the first detection of these highly elusive particles.

“Our work provides a concrete way to detect such particles,” said Rovelli.

“The interest is that such particles could be a major component of the mysterious dark matter revealed by astronomers. If the detection we propose could be achieved, it would be spectacular: at the same time, it would tell us what dark matter is; it would validate the quantum gravity ideas leading to the prediction that these particles exist, in particular loop quantum gravity, which is the basis of the prediction; and it would reveal a new kind of object in nature: these Planck-scale particles.”

The protocol developed by this research team could serve as the basis for the development of new detectors to search for dark matter particles with Planck-scale masses. Rovelli, who is a theoretical physicist, is currently conducting new studies aimed at understanding how black holes might evolve into these hypothetical dark matter particles.

“The detection of such particles will be a huge challenge technologically and there may be room to think of other ways of detection, using the same principle but different sensors,” said Christodoulou. “This is something that I keep in the back of my head and think about.”

While Rovelli is now continuing his theoretical work, Christodoulou and Perez have initiated collaborations with experimental physicists, such as Gerard Higgins and Martin Zemlicka at the Austrian Academy of Sciences (OEAW) in Vienna. These collaborations could lead to studies exploring the possibility of measuring gravitational fields using superconductors.

“I believe that the hypothesis that dark matter is made of Planckian mass particles must have other observational consequences in astrophysics,” added Perez.

“For example, their extremely weak interaction with other particles (combined with their quantum mechanical nature) might imply that such dark matter behaves differently than expected when forming structure via their gravitational attraction: it is possible that it could explain some puzzles in the structure of the galactic halos.”

More information:
Marios Christodoulou et al, Detecting Planck-Scale Dark Matter with Quantum Interference, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.111001.

© 2024 Science X Network

Citation:
Detecting Planck-scale dark matter by leveraging quantum interference (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-planck-scale-dark-leveraging-quantum.html


New apps aid blind people in navigating indoor spaces

A blind person navigates indoors with the help of the new app and a guide dog. Credit: Roberto Manduchi

Two new apps are set to assist blind individuals in navigating indoor spaces by providing spoken directions through a smartphone. This offers a safe solution for wayfinding in areas where GPS is ineffective.

UC Santa Cruz professor of Computer Science and Engineering Roberto Manduchi has devoted much of his research career to creating accessible technology for the blind and visually impaired. Throughout years of working with these communities, he has learned that there is a particular need for tools to help with indoor navigation of new spaces.

“Moving about independently in a place that you don’t know is particularly difficult, because you don’t have any visual reference—it’s very easy to get lost. The idea here is to try to make this a little bit easier and safer for people,” Manduchi said.

In a new paper published in the journal ACM Transactions on Accessible Computing, Manduchi’s research group presents two smartphone apps that provide indoor wayfinding, navigation to a specific point, and safe return, the process of tracing back a past route. The apps give audio cues and don’t require a user to hold their smartphone in front of themselves, which would be inconvenient and attract undue attention.

Safer, scalable technology

Smartphones provide a great platform for hosting accessible technology because they are cheaper than dedicated hardware systems, benefit from the manufacturers’ ongoing software support, and are equipped with built-in sensors and accessibility features.

Other smartphone-based wayfinding systems require a person to walk with their phone out, which can create several problems. A blind person navigating a new space often has at least one hand occupied by a guide dog or a cane, so using the other hand for a phone is less than ideal. Holding a phone out also leaves the navigator vulnerable to crime, and people with disabilities already experience crime at disproportionately higher rates.

While companies like Apple and Google have developed indoor wayfinding for some specific locations, such as major airports and stadiums, their methods depend on sensors that are installed inside these buildings. This makes it a much less scalable solution due to the cost of adding and maintaining extra infrastructure.

Using built-in sensors

Manduchi’s wayfinding app provides a route in a similar way to GPS services like Google Maps; however, GPS-based systems don’t work indoors because the satellite signal is distorted by a building’s walls. Instead, Manduchi’s system uses other sensors within a smartphone to provide spoken instructions to navigate an unfamiliar building.

The wayfinding app uses a map of the inside of a building to find a path toward the destination, and then uses the phone’s built-in inertial sensors (the accelerometers and gyroscopes that also power features like the step counter) to track the navigator’s progress along the path.

The same sensors can also track the orientation of the phone, and therefore of the navigator. However, the estimated location and orientation are often somewhat inaccurate, so the researchers incorporated a method called particle filtering to enforce the physical constraints of the building, preventing the system from concluding that the navigator has walked through a wall or into some other impossible position.
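A minimal sketch of how such a map-constrained particle filter step might look (hypothetical code; the names and structure are illustrative, not the authors' implementation):

import math
import random

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Standard orientation test; ignores degenerate collinear cases.
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)) and (_ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def crosses_wall(start, end, walls):
    # walls: list of ((x1, y1), (x2, y2)) segments taken from the floor plan
    return any(segments_intersect(start, end, w0, w1) for w0, w1 in walls)

def particle_filter_step(particles, step_length, heading, walls,
                         step_noise=0.1, heading_noise=0.05):
    """One dead-reckoning update constrained by the building map.

    particles: list of (x, y) hypotheses about the user's position.
    step_length, heading: estimates derived from the phone's inertial sensors.
    """
    survivors = []
    for x, y in particles:
        # Sample noisy versions of the inertial estimates for this hypothesis.
        length = step_length * (1 + random.gauss(0, step_noise))
        theta = heading + random.gauss(0, heading_noise)
        nx, ny = x + length * math.cos(theta), y + length * math.sin(theta)
        # Enforce physical constraints: discard hypotheses that walk through walls.
        if not crosses_wall((x, y), (nx, ny), walls):
            survivors.append((nx, ny))
    # If every particle died (e.g., severe drift), resample around the old cloud.
    return survivors or [(x + random.gauss(0, 0.2), y + random.gauss(0, 0.2))
                         for x, y in particles]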

The backtracking app simply inverts a route previously taken by a navigator, helpful for situations in which a blind person is guided into a room and wants to leave independently. In addition to inertial sensors, it uses the phone’s magnetometer to identify characteristic magnetic field anomalies, typically created by large appliances, which can serve as landmarks within a building.
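A toy illustration of the route-inversion idea (hypothetical code, not the published apps): record the outbound walk as a list of (heading, distance) legs, then replay it in reverse order with every heading rotated by 180 degrees; magnetometer "fingerprints" logged along the way could additionally be re-matched on the return trip to correct drift.

import math

def invert_route(legs):
    """Turn an outbound route into its return route.

    legs: list of (heading_radians, distance_m) recorded while walking in.
    The return route traverses the same legs backwards, with each heading
    flipped by 180 degrees.
    """
    return [((heading + math.pi) % (2 * math.pi), distance)
            for heading, distance in reversed(legs)]

# Example: three recorded legs of an outbound walk.
outbound = [(0.0, 5.0), (math.pi / 2, 3.0), (0.0, 2.0)]
print(invert_route(outbound))
# -> [(3.14..., 2.0), (4.71..., 3.0), (3.14..., 5.0)]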

Communicating directions

Both systems give directions through spoken communication and can also be used with a smartwatch that supplements the instructions with vibrations. Overall, the researchers tried to minimize the amount of information delivered to the navigator so that they can focus on safety.

They also rely on the navigator to make judgements about where to turn, to account for any error in tracking. The system instructs a person to make their next directional change five meters before it anticipates the turn will occur, with directions like “at the upcoming junction, turn left,” and the navigator can begin to find the turn with the help of their cane or guide dog.

“Sharing responsibility, in my opinion, is the right approach,” Manduchi said. “As a philosophy, you cannot rely on technology alone. That is also true when you drive a car—if it says turn right, you don’t just immediately turn right, you look for where the junction is. You need to work with the system.”

Testing their systems in the Baskin Engineering building at UC Santa Cruz, the research team found that users were able to successfully navigate the many hallways and turns. The team will continue to polish their apps, which use the same interface but are separate for the ease of development.

Going forward, they will focus on integrating AI features that could allow a navigator to take a photo of their surroundings and get a scene description if they are in a particularly hard area to navigate, like an alcove of a building or a wide-open space. They also want to enhance the ability to access and download building maps, perhaps taking advantage of an open source software ecosystem to do so.

“I’m very grateful to the blind community in Santa Cruz, who gave me fantastic advice. [As engineers creating technology for the blind community], you have to be very, very careful and very humble, and start from the person who will use the technology, rather than from the technology itself,” Manduchi said.

More information:
Chia Hsuan Tsai et al, All the Way There and Back: Inertial-Based, Phone-in-Pocket Indoor Wayfinding and Backtracking Apps for Blind Travelers, ACM Transactions on Accessible Computing (2024). DOI: 10.1145/3696005

Citation:
New apps aid blind people in navigating indoor spaces (2024, October 8)
retrieved 8 October 2024
from https://techxplore.com/news/2024-10-apps-aid-people-indoor-spaces.html


Research suggests Earth’s oldest continental crust is disintegrating

Models of North China Craton deformation since the middle Jurassic, showing phases of flat slab subduction (a, b) and rollback (c, d). Key: overriding plate (O), downgoing plate (D), trench (T), leading edge of flat slab (L), dynamic topography (DT), continental crust (CC), cratonic lithospheric mantle (CLM), oceanic crust (OC), and oceanic lithospheric mantle (OLM). Credit: Liu et al, 2024.

Earth’s continental configurations have changed dramatically over its billions of years of history, transforming not only the continents’ positions across the planet but also their topography, as expansion and contraction of the crust left their mark on the landscape. Some areas of continental crust, known as cratons, have remained stable since early in Earth’s history, with little destruction by tectonic events or mantle convection.

Recent research, published in Nature Geoscience, has considered the mechanisms by which these cratons may have deformed, a process termed decratonization.

While subduction (when a denser tectonic plate is forced beneath another into the underlying mantle, where it melts) and deep mantle plumes (when a segment of the mantle rises toward the surface due to its buoyancy and thermally erodes the crust) have been proposed as possible causes, the mechanisms driving the deformation and eventual destruction of Earth’s cratons remain elusive.

Professor Shaofeng Liu, of China University of Geosciences, and colleagues have investigated the disintegration of one particular craton over a period of 200 million years.

To do so, the research team studied the North China Craton (NCC), which borders the western Pacific Ocean, since the middle Mesozoic (168 million years ago, Ma) using four-dimensional mantle flow models of Earth’s plate-mantle system. This included data on the evolution of surface topography, deformation of the lithosphere (the crust and uppermost mantle) and seismic tomography (a technique that uses seismic waves to generate 3D models of Earth’s interior).

They identified two stages of major change that led to the NCC’s deformation through time. Initially, subduction of the shallowly dipping oceanic Izanagi plate from the east (flat-slab subduction) thickened the overriding NCC crust of the Eurasian plate: the crust was shortened by compression and formed topographic highs (i.e., mountain ranges), whose furthest extent appears at the surface today as the Taihang Mountains. This occurred because the Eurasian plate was moving eastwards at pace.

A subsequent phase of rapid flat-slab rollback (when the subducting plate retreats back to the surface) led to lithospheric extension and thinning by 26% compared to its initial thickness. This resulted from the movement of the NCC changing from eastwards to southwards, slowing the convergence of the two plates.

Reconstruction of North China Craton flat slab subduction and rollback from 180 to 86 million years ago. Credit: Liu et al, 2024

These two stages occurred over millions of years in multiple phases, beginning with northeast-trending thrust faults (in which older rocks are pushed above younger rocks) and transpressional faults (horizontal displacement of rocks combined with shortening perpendicular to the movement) during the Jurassic and early Cretaceous (from the beginning of the study period at ~200 Ma through to 136 Ma).

From 136 Ma there were several episodes of crustal extension, interrupted by compression from 93 to 80 Ma in the late Cretaceous, followed by continued extension through to the present day, ultimately leading to disintegration of the craton.

To validate these findings, the scientists generated three flow models to reconstruct the tectonic history of the region, based upon predictions of their structures in the modern day and comparison to seismic tomography data.

The validated flat-slab rollback model accurately reproduced a 4,000-km-wide slab reaching depths of up to 660 km within the mantle transition zone, which ultimately went on to form a large mantle wedge.

This feature is evidenced in the volcanic rock record observed today, with carbonates recycled from the subducted slab into the upper mantle forming characteristic carbonated peridotite. Over tens of millions of years, this mantle wedge eventually disappeared as slab rollback progressed.

Decratonization of the NCC is not an isolated event: Professor Liu suggests that other areas of the planet may have experienced similar processes, with local differences, and this is the focus of continued research.

“The North American craton, South American craton and the Yangtze craton in China may have experienced similar deformation. All of these may have experienced early flat-slab subduction. However, intense subsequent rollback subduction might have occurred in the Yangtze craton. In contrast, the North American craton underwent trench retreat following flat-slab subduction but did not exhibit significant slab rollback.”

Overall, this research highlights how cratons in continental interiors are less likely to be destabilized compared to those close to plate boundaries, which may be susceptible to subduction and rollback processes over time.

“Ancient lithosphere can be broken apart, and this disintegration can be caused by this special form of subduction occurring near oceanic plates, revealing how the continents evolved over Earth’s history,” Professor Liu concludes.

More information:
Shaofeng Liu et al, Craton deformation from flat-slab subduction and rollback, Nature Geoscience (2024). DOI: 10.1038/s41561-024-01513-2

© 2024 Science X Network

Citation:
Research suggests Earth’s oldest continental crust is disintegrating (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-earth-oldest-continental-crust-disintegrating.html


Sperm whale departure linked to decline in jumbo squid population in Gulf of California

Credit: Hector Perez-Puig, CC BY

A PeerJ study has revealed that a significant departure of sperm whales (Physeter macrocephalus) from the central portion of the Gulf of California is linked to the collapse of the jumbo squid (Dosidicus gigas) population, their primary prey.

The study, led by researchers MSc Héctor Pérez-Puig and Dr. Alejandro Arias Del Razo, offers insight into the relationship between apex marine predators and their environment, highlighting sperm whales as key indicators of oceanic health.

The research, conducted over a 9-year period in the eastern Midriff Islands Region of the Gulf of California, utilized extensive survey data and photo-identification techniques to track sperm whale populations. Findings indicate a striking correlation between the decline of jumbo squid and the disappearance of sperm whales from the region, with no sightings recorded from 2016 to 2018.

Key findings:

  • Population decline: Between 2009 and 2015, the population of sperm whales in the central Gulf of California ranged between 20 and 167 individuals, with a total “super population” of 354 whales. However, from 2016 to 2018, sperm whale sightings ceased entirely.
  • Impact of jumbo squid collapse: Generalized additive models show a positive relationship (R² = 0.644) between sperm whale sightings and jumbo squid landings, indicating that as squid populations dwindled, sperm whales left the region (a minimal fitting sketch follows this list).
  • Environmental drivers: The decline of both species is attributed to environmental changes, including sustained ocean warming and intensified El Niño events, which have shifted the ecosystem dynamics in the Gulf of California. The jumbo squid population has been particularly affected, showing a shift to smaller phenotypes, which may no longer sustain larger predators like sperm whales.
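A minimal sketch of the kind of model fit described in the list above, using the open-source pygam package and synthetic stand-in data (the arrays below are illustrative, not the study's data):

import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)

# Synthetic stand-in data: annual jumbo squid landings (predictor, tonnes)
# and sperm whale sightings (response), with a noisy positive relationship.
squid_landings = rng.uniform(0, 50_000, size=40)
whale_sightings = 0.002 * squid_landings + rng.normal(0, 15, size=40)

# Generalized additive model with one smooth term for squid landings.
gam = LinearGAM(s(0)).fit(squid_landings.reshape(-1, 1), whale_sightings)

# The summary includes a pseudo-R^2; the study reports R^2 = 0.644 for its GAM.
gam.summary()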

Ecosystem implications

Sperm whales, as apex predators, play a crucial role in controlling energy flow within marine ecosystems. Their departure from the Gulf of California suggests broader ecosystem changes and raises concerns about the long-term health of the region. The study underscores the importance of long-term data collection in understanding population trends and the effects of climate change on marine species.

Lead author Héctor Pérez-Puig emphasized the broader ecological implications of the findings: “The departure of sperm whales from the Gulf of California serves as a sentinel signal, reflecting significant shifts in marine ecosystems. As the environment changes, so too does the delicate balance between predators and prey.”

The study calls for more detailed analysis to fully understand the movements of sperm whales and their prey, particularly in light of the ongoing “tropicalization” of the Gulf of California. Researchers recommend continued monitoring to assess the impact of environmental changes on marine species and the overall health of the ecosystem.

This research offers a vital contribution to the field of marine biology and ecology, with implications for the conservation of both sperm whales and the larger marine environment in the Gulf of California.

More information:
The departure of sperm whales (Physeter macrocephalus) in response to the declining jumbo squid (Dosidicus gigas) population in the central portion of the Gulf of California. PeerJ (2024). DOI: 10.7717/peerj.18117

Journal information:
PeerJ


Citation:
Sperm whale departure linked to decline in jumbo squid population in Gulf of California (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-sperm-whale-departure-linked-decline.html


Nobel Prize in physics awarded to 2 scientists for discoveries that enabled machine learning

John Hopfield and Geoffrey Hinton, seen in the picture, were awarded this year’s Nobel Prize in Physics, announced at a press conference by Hans Ellegren, center, permanent secretary of the Royal Swedish Academy of Sciences in Stockholm, Sweden, on Tuesday, Oct. 8, 2024. Credit: Christine Olsson/TT News Agency via AP

John Hopfield and Geoffrey Hinton—who is known as the Godfather of artificial intelligence—were awarded the Nobel Prize in physics Tuesday for discoveries and inventions that formed the building blocks of machine learning and artificial intelligence.

“This year’s two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” the Nobel committee said in a press release.

Hopfield’s research is carried out at Princeton University and Hinton works at the University of Toronto.

Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates “used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets.”

She said that such networks have been used to advance research in physics and “have also become part of our daily lives, for instance in facial recognition and language translation.”

While the committee honored the science behind machine learning and artificial intelligence, Moons also mentioned its flipside, saying that “while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind.”

Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create.

Artificial intelligence pioneer Geoffrey Hinton speaks at the Collision Conference in Toronto, Wednesday, June 19, 2024. Credit: Chris Young/The Canadian Press via AP, File

On Tuesday, he said he was shocked at the honor.

“I’m flabbergasted. I had no idea this would happen,” he said when the Nobel committee reached him by phone.

Hinton said he continues to worry “about a number of possible bad consequences” of his machine learning work, “particularly the threat of these things getting out of control,” but still would do it all over again.

Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer.

Computer scientist Geoffrey Hinton poses at Google’s Mountain View, Calif., headquarters on Wednesday, March 25, 2015. Credit: AP Photo/Noah Berger, File

The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award’s creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel’s death.

Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.


Nobel committee announcement:

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to

John J. Hopfield, Princeton University, NJ, U.S.

Geoffrey E. Hinton, University of Toronto, Canada

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin—a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

The Nobel Prize in Physics 2024

This year’s laureates used tools from physics to construct methods that helped lay the foundation for today’s powerful machine learning. John Hopfield created a structure that can store and reconstruct information. Geoffrey Hinton invented a method that can independently discover properties in data and which has become important for the large artificial neural networks now in use.

They used physics to find patterns in information

Many people have experienced how computers can translate between languages, interpret images and even conduct reasonable conversations. What is perhaps less well known is that this type of technology has long been important for research, including the sorting and analysis of vast amounts of data. The development of machine learning has exploded over the past fifteen to twenty years and utilises a structure called an artificial neural network. Nowadays, when we talk about artificial intelligence, this is often the type of technology we mean.

Although computers cannot think, machines can now mimic functions such as memory and learning. This year’s laureates in physics have helped make this possible. Using fundamental concepts and methods from physics, they have developed technologies that use structures in networks to process information.

Machine learning differs from traditional software, which works like a type of recipe. The software receives data, which is processed according to a clear description and produces the results, much like when someone collects ingredients and processes them by following a recipe, producing a cake. Instead of this, in machine learning the computer learns by example, enabling it to tackle problems that are too vague and complicated to be managed by step-by-step instructions. One example is interpreting a picture to identify the objects in it.

Mimics the brain

An artificial neural network processes information using the entire network structure. The inspiration initially came from the desire to understand how the brain works. In the 1940s, researchers had started to reason around the mathematics that underlies the brain’s network of neurons and synapses. Another piece of the puzzle came from psychology, thanks to neuroscientist Donald Hebb’s hypothesis about how learning occurs because connections between neurons are reinforced when they work together.

Later, these ideas were followed by attempts to recreate how the brain’s network functions by building artificial neural networks as computer simulations. In these, the brain’s neurons are mimicked by nodes that are given different values, and the synapses are represented by connections between the nodes that can be made stronger or weaker. Donald Hebb’s hypothesis is still used as one of the basic rules for updating artificial networks through a process called training.
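In its simplest textbook form (a generic version, not a formula from the prize documents), Hebb's idea becomes a weight-update rule

$$ \Delta w_{ij} = \eta\, x_i\, x_j, $$

where $w_{ij}$ is the strength of the connection between nodes $i$ and $j$, $x_i$ and $x_j$ are their activities, and $\eta$ is a learning rate: nodes that are active together strengthen their mutual connection.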

At the end of the 1960s, some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use. However, interest in artificial neural networks was reawakened in the 1980s, when several important ideas made an impact, including work by this year’s laureates.

Associative memory

Imagine that you are trying to remember a fairly unusual word that you rarely use, such as one for that sloping floor often found in cinemas and lecture halls. You search your memory. It’s something like ramp… perhaps rad…ial? No, not that. Rake, that’s it!

This process of searching through similar words to find the right one is reminiscent of the associative memory that the physicist John Hopfield discovered in 1982. The Hopfield network can store patterns and has a method for recreating them. When the network is given an incomplete or slightly distorted pattern, the method can find the stored pattern that is most similar.

Hopfield had previously used his background in physics to explore theoretical problems in molecular biology. When he was invited to a meeting about neuroscience he encountered research into the structure of the brain. He was fascinated by what he learned and started to think about the dynamics of simple neural networks. When neurons act together, they can give rise to new and powerful characteristics that are not apparent to someone who only looks at the network’s separate components.

In 1980, Hopfield left his position at Princeton University, where his research interests had taken him outside the areas in which his colleagues in physics worked, and moved across the continent. He had accepted the offer of a professorship in chemistry and biology at Caltech (California Institute of Technology) in Pasadena, southern California. There, he had access to computer resources that he could use for free experimentation and to develop his ideas about neural networks.

However, he did not abandon his foundation in physics, where he found inspiration for his understanding of how systems with many small components that work together can give rise to new and interesting phenomena. He particularly benefitted from having learned about magnetic materials that have special characteristics thanks to their atomic spin—a property that makes each atom a tiny magnet. The spins of neighbouring atoms affect each other; this can allow domains to form with spin in the same direction. He was able to make a model network with nodes and connections by using the physics that describes how materials develop when spins influence each other.

The network saves images in a landscape

The network that Hopfield built has nodes that are all joined together via connections of different strengths. Each node can store an individual value—in Hopfield’s first work this could either be 0 or 1, like the pixels in a black and white picture.

Hopfield described the overall state of the network with a property that is equivalent to the energy in the spin system found in physics; the energy is calculated using a formula that uses all the values of the nodes and all the strengths of the connections between them. The Hopfield network is programmed by an image being fed to the nodes, which are given the value of black (0) or white (1). The network’s connections are then adjusted using the energy formula, so that the saved image gets low energy. When another pattern is fed into the network, there is a rule for going through the nodes one by one and checking whether the network has lower energy if the value of that node is changed. If it turns out that energy is reduced if a black pixel is white instead, it changes colour. This procedure continues until it is impossible to find any further improvements. When this point is reached, the network has often reproduced the original image on which it was trained.
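In symbols, this energy takes the standard textbook (Ising-like) form, written here with ±1 node values for compactness even though the description above uses 0 and 1:

$$ E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i\, s_j, \qquad s_i \in \{-1, +1\}, $$

and the node-by-node rule amounts to setting $s_i \leftarrow \mathrm{sign}\big(\sum_j w_{ij} s_j\big)$, which can only keep $E$ the same or lower it. Storing patterns $\xi^\mu$ with the Hebbian choice $w_{ij} = \frac{1}{N}\sum_\mu \xi_i^\mu \xi_j^\mu$ places each stored pattern at or near a local energy minimum.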

This may not appear so remarkable if you only save one pattern. Perhaps you are wondering why you don’t just save the image itself and compare it to another image being tested, but Hopfield’s method is special because several pictures can be saved at the same time and the network can usually differentiate between them.

Hopfield likened searching the network for a saved state to rolling a ball through a landscape of peaks and valleys, with friction that slows its movement. If the ball is dropped in a particular location, it will roll into the nearest valley and stop there. If the network is given a pattern that is close to one of the saved patterns it will, in the same way, keep moving forward until it ends up at the bottom of a valley in the energy landscape, thus finding the closest pattern in its memory.

The Hopfield network can be used to recreate data that contains noise or which has been partially erased.

Hopfield and others have continued to develop the details of how the Hopfield network functions, including nodes that can store any value, not just zero or one. If you think about nodes as pixels in a picture, they can have different colours, not just black or white. Improved methods have made it possible to save more pictures and to differentiate between them even when they are quite similar. It is just as possible to identify or reconstruct any information at all, provided it is built from many data points.
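A minimal NumPy sketch of a binary Hopfield network, illustrating the store-and-recover behaviour described above (illustrative code, not Hopfield's original program):

import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: patterns has shape (P, N) with entries in {-1, +1}."""
    _, n = patterns.shape
    weights = patterns.T @ patterns / n   # outer-product (Hebbian) rule
    np.fill_diagonal(weights, 0)          # no self-connections
    return weights

def recall(weights, state, sweeps=5, seed=0):
    """Asynchronous updates: flip nodes toward lower energy until (near) stable."""
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 64))   # two random 64-"pixel" patterns
weights = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[:10] *= -1                               # corrupt ten pixels
recovered = recall(weights, noisy)
print("recovered the stored pattern:", bool(np.array_equal(recovered, patterns[0])))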

Classification using nineteenth-century physics

Remembering an image is one thing, but interpreting what it depicts requires a little more.

Even very young children can point at different animals and confidently say whether it is a dog, a cat, or a squirrel. They might get it wrong occasionally, but fairly soon they are correct almost all the time. A child can learn this even without seeing any diagrams or explanations of concepts such as species or mammal. After encountering a few examples of each type of animal, the different categories fall into place in the child’s head. People learn to recognise a cat, or understand a word, or enter a room and notice that something has changed, by experiencing the environment around them.

When Hopfield published his article on associative memory, Geoffrey Hinton was working at Carnegie Mellon University in Pittsburgh, U.S. He had previously studied experimental psychology and artificial intelligence in England and Scotland and was wondering whether machines could learn to process patterns in a similar way to humans, finding their own categories for sorting and interpreting information. Along with his colleague, Terrence Sejnowski, Hinton started from the Hopfield network and expanded it to build something new, using ideas from statistical physics.

Statistical physics describes systems that are composed of many similar elements, such as molecules in a gas. It is difficult, or impossible, to track all the separate molecules in the gas, but it is possible to consider them collectively to determine the gas’ overarching properties like pressure or temperature. There are many potential ways for gas molecules to spread through its volume at individual speeds and still result in the same collective properties.

The states in which the individual components can jointly exist can be analysed using statistical physics, and the probability of them occurring calculated. Some states are more probable than others; this depends on the amount of available energy, which is described in an equation by the nineteenth-century physicist Ludwig Boltzmann. Hinton’s network utilised that equation, and the method was published in 1985 under the striking name of the Boltzmann machine.
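The equation referred to here is the Boltzmann distribution, which assigns each joint configuration $s$ of the nodes a probability that falls off exponentially with its energy:

$$ p(s) = \frac{e^{-E(s)/k_B T}}{\sum_{s'} e^{-E(s')/k_B T}}, $$

so low-energy configurations are exponentially more likely. Training a Boltzmann machine adjusts the weights inside $E$ so that the training patterns land in this high-probability region.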

Recognising new examples of the same type

The Boltzmann machine is commonly used with two different types of nodes. Information is fed to one group, which are called visible nodes. The other nodes form a hidden layer. The hidden nodes’ values and connections also contribute to the energy of the network as a whole.

The machine is run by applying a rule for updating the values of the nodes one at a time. Eventually the machine will enter a state in which the nodes’ pattern can change, but the properties of the network as a whole remain the same. Each possible pattern will then have a specific probability that is determined by the network’s energy according to Boltzmann’s equation. When the machine stops it has created a new pattern, which makes the Boltzmann machine an early example of a generative model.

The Boltzmann machine can learn—not from instructions, but from being given examples. It is trained by updating the values in the network’s connections so that the example patterns, which were fed to the visible nodes when it was trained, have the highest possible probability of occurring when the machine is run. If the same pattern is repeated several times during this training, the probability for this pattern is even higher. Training also affects the probability of outputting new patterns that resemble the examples on which the machine was trained.

A trained Boltzmann machine can recognise familiar traits in information it has not previously seen. Imagine meeting a friend’s sibling, and you can immediately see that they must be related. In a similar way, the Boltzmann machine can recognise an entirely new example if it belongs to a category found in the training material, and differentiate it from material that is dissimilar.

In its original form, the Boltzmann machine is fairly inefficient and takes a long time to find solutions. Things become more interesting when it is developed in various ways, which Hinton has continued to explore. Later versions have been thinned out, as the connections between some of the units have been removed. It turns out that this may make the machine more efficient.
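The best known "thinned-out" variant is the restricted Boltzmann machine, in which visible and hidden nodes connect only to each other, never within a layer. A compact, illustrative sketch of one training step in the contrastive-divergence style commonly used with such machines (not Hinton's original implementation):

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))   # no visible-visible or
b_v = np.zeros(n_visible)                               # hidden-hidden weights:
b_h = np.zeros(n_hidden)                                # that is the "restriction"

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update from a binary visible vector v0."""
    global W, b_v, b_h
    # Up pass: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Down-up pass: one step of "dreaming" a reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Push the model toward the data and away from its own reconstruction.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

example = np.array([1., 0., 1., 1., 0., 0.])
for _ in range(100):
    cd1_step(example)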

During the 1990s, many researchers lost interest in artificial neural networks, but Hinton was one of those who continued to work in the field. He also helped start the new explosion of exciting results; in 2006 he and his colleagues Simon Osindero, Yee Whye Teh and Ruslan Salakhutdinov developed a method for pretraining a network with a series of Boltzmann machines in layers, one on top of the other. This pretraining gave the connections in the network a better starting point, which optimised its training to recognise elements in pictures.

The Boltzmann machine is often used as part of a larger network. For example, it can be used to recommend films or television series based on the viewer’s preferences.

Machine learning—today and tomorrow

Thanks to their work from the 1980s and onward, John Hopfield and Geoffrey Hinton have helped lay the foundation for the machine learning revolution that started around 2010.

The development we are now witnessing has been made possible through access to the vast amounts of data that can be used to train networks, and through the enormous increase in computing power. Today’s artificial neural networks are often enormous and constructed from many layers. These are called deep neural networks and the way they are trained is called deep learning.

A quick glance at Hopfield’s article on associative memory, from 1982, provides some perspective on this development. In it, he used a network with 30 nodes. If all the nodes are connected to each other, there are 435 connections. The nodes have their values, the connections have different strengths and, in total, there are fewer than 500 parameters to keep track of. He also tried a network with 100 nodes, but this was too complicated, given the computer he was using at the time. We can compare this to the large language models of today, which are built as networks that can contain more than one trillion parameters (one million millions).
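The arithmetic behind "fewer than 500 parameters" is simple: a fully connected, undirected network of 30 nodes has

$$ \binom{30}{2} = \frac{30 \times 29}{2} = 435 $$

pairwise connection strengths, plus the 30 node values themselves, for 465 numbers in total.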

Many researchers are now developing machine learning’s areas of application. Which will be the most viable remains to be seen, while there is also wide-ranging discussion on the ethical issues that surround the development and use of this technology.

Because physics has contributed tools for the development of machine learning, it is interesting to see how physics, as a research field, is also benefitting from artificial neural networks. Machine learning has long been used in areas we may be familiar with from previous Nobel Prizes in Physics. These include the use of machine learning to sift through and process the vast amounts of data necessary to discover the Higgs particle. Other applications include reducing noise in measurements of the gravitational waves from colliding black holes, or the search for exoplanets.

In recent years, this technology has also begun to be used when calculating and predicting the properties of molecules and materials—such as calculating protein molecules’ structure, which determines their function, or working out which new versions of a material may have the best properties for use in more efficient solar cells.

More information:
www.nobelprize.org/prizes/phys … dvanced-information/

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Nobel Prize in physics awarded to 2 scientists for discoveries that enabled machine learning (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-nobel-prize-physics-awarded-discoveries.html
