
Space radiation can damage satellites—next-generation material could self-heal when exposed to cosmic rays

Dual dose irradiation experiments. Irradiation of the PSC with a NIEL-dominated 0.06 MeV proton beam (red) is followed by irradiation with a 1.0 MeV proton beam (green). By varying the fluence of the two radiation exposures, we selectively demonstrate how IEL participates in partial recovery of the solar cell performance after initial radiation damage. Credit: Nature Communications (2024). DOI: 10.1038/s41467-024-44876-1

The space environment is harsh and full of extreme radiation. Scientists designing spacecraft and satellites need materials that can withstand these conditions.

In a paper published in January 2024 in Nature Communications, my team of materials researchers demonstrated that a next-generation semiconductor material called metal-halide perovskite can actually recover and heal itself from radiation damage.

Metal-halide perovskites take their name and crystal structure from the mineral perovskite, which was discovered in 1839 and is found abundantly in Earth's crust. They absorb sunlight and efficiently convert it into electricity, making them a potentially good fit for space-based solar panels that can power satellites or future space habitats.

Researchers make perovskites in the form of inks, then coat the inks onto glass plates or plastic, creating thin, filmlike devices that are lightweight and flexible.

Surprisingly, these thin-film solar cells perform as well as conventional silicon solar cells in laboratory demonstrations, even though they are almost 100 times thinner.

But these films can degrade if they’re exposed to moisture or oxygen. Researchers and industry are currently working on addressing these stability concerns for terrestrial deployment.






Cosmic rays move through space, and too much exposure can damage satellites and spacecraft.

To test how they might hold up in space, my team developed a radiation experiment. We exposed perovskite solar cells to protons at both low and high energies and found a unique, new property.

The high-energy protons healed the damage caused by the low-energy protons, allowing the device to recover and continue doing its job. The conventional semiconductors used for space electronics do not show this healing.

My team was surprised by this finding. How can a material that degrades when exposed to oxygen and moisture not only resist the harsh radiation of space but also self-heal in an environment that destroys conventional silicon semiconductors?

In our paper, we started to unravel this mystery.

Why it matters

Scientists predict that in the next 10 years, satellite launches into near-Earth orbit will increase exponentially, and space agencies such as NASA aim to establish bases on the moon.

Materials that can tolerate extreme radiation and self-heal would change the game.

Researchers estimate that deploying just a few pounds of perovskite materials into space could generate up to 10 million watts of power. It currently costs about US$4,000 per kilogram ($1,818 per pound) to launch materials into space, so efficient materials are important.
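To put those figures together, here is a quick back-of-the-envelope calculation in Python. The launch cost and power numbers are the ones quoted above; the 2 kg mass standing in for "a few pounds" is an illustrative assumption, not a value from the study.

```python
# Back-of-the-envelope launch-cost estimate for perovskite solar films.
# Article figures: ~$4,000 per kg launch cost, "a few pounds" of material
# generating up to 10 million watts. The 2 kg mass is an illustrative
# assumption, not a number from the study.

launch_cost_per_kg = 4_000       # US$ per kilogram to orbit (article figure)
perovskite_mass_kg = 2.0         # assumed "a few pounds" (~4.4 lb)
power_output_w = 10_000_000      # watts, upper estimate quoted in the article

launch_cost = launch_cost_per_kg * perovskite_mass_kg
cost_per_watt = launch_cost / power_output_w

print(f"Launch cost for {perovskite_mass_kg} kg: ${launch_cost:,.0f}")
print(f"Launch cost per watt generated: ${cost_per_watt:.4f}/W")
# -> roughly $8,000 to launch, i.e. well under a cent of launch cost per watt
#    under these assumptions, which is why lightweight, efficient materials
#    are attractive for space power.
```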

What still isn’t known

Our findings shed light on a remarkable aspect of perovskites—their tolerance to damage and defects. Perovskite crystals are a type of soft material, which means that their atoms can move into different states that scientists call vibrational modes.

Atoms in perovskites are normally arranged in a lattice formation. But radiation can knock the atoms out of position, damaging the material. The vibrations might help reposition the atoms back into place, but we’re still not sure exactly how this process works.

What’s next?

Our findings suggest that soft materials might be uniquely helpful in extreme environments, including space.

But radiation isn’t the only stress that materials have to weather in space. Scientists don’t yet know how perovskites will fare when exposed to vacuum conditions and extreme temperature variations, along with radiation, all at once. Temperature could play a role in the healing behavior my team saw, but we’ll need to conduct more research to determine how.

These results tell us that soft materials could help scientists develop technology that works well in extreme environments. Future research could dive deeper into how the vibrations in these materials relate to any self-healing properties.

More information:
Ahmad R. Kirmani et al, Unraveling radiation damage and healing mechanisms in halide perovskites using energy-tuned dual irradiation dosing, Nature Communications (2024). DOI: 10.1038/s41467-024-44876-1

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Space radiation can damage satellites—next-generation material could self-heal when exposed to cosmic rays (2024, June 24)
retrieved 25 June 2024
from https://phys.org/news/2024-06-space-satellites-generation-material-exposed.html


China, France launch satellite to better understand the universe

A Long March 2-C rocket carrying a satellite jointly developed by China and France to measure gamma-ray bursts lifts off from a space base in Xichang in southwestern China.

A French-Chinese satellite blasted off Saturday on a hunt for the mightiest explosions in the universe, in a notable example of cooperation between a Western power and the Asian giant.

Developed by engineers from both countries, the Space Variable Objects Monitor (SVOM) is carrying four instruments—two French, two Chinese—that will seek out gamma-ray bursts, the light from which has traveled billions of light years to reach Earth.

The 930-kilogram (2,050-pound) satellite “successfully” took off around 3:00 pm (0700 GMT) aboard a Chinese Long March 2-C rocket from a space base in Xichang, in southwestern Sichuan province, China’s National Space Administration said.

Gamma-ray bursts generally occur after the explosion of huge stars, those more than 20 times as massive as the sun, or the merger of compact stars.

The extremely bright cosmic beams can give off a blast of energy equivalent to more than a billion billion suns.

Observing them is like “looking back in time, as the light from these objects takes a long time to reach us”, Ore Gottlieb, an astrophysicist at the Flatiron Institute’s Center for Astrophysics in New York, told AFP.

‘Several mysteries’

The rays carry traces of the gas clouds and galaxies they pass through on their journey through space—valuable data for better understanding the history and evolution of the universe.

“SVOM has the potential to unravel several mysteries in the field of (gamma-ray bursts), including detecting the most distant GRBs in the universe, which correspond to the earliest GRBs,” Gottlieb said.

The most distant bursts identified to date were produced just 630 million years after the Big Bang—when the universe was in its infancy.

“We are… interested in gamma-ray bursts for their own sake because they are very extreme cosmic explosions which allow us to better understand the death of certain stars,” said Frederic Daigne, an astrophysicist at the Paris Institute of Astrophysics.

“All of this data makes it possible to test the laws of physics with phenomena that are impossible to reproduce in the laboratory on Earth.”

A Long March 2-C rocket carrying a satellite jointly developed by China and France to measure gamma-ray bursts lifts off from a space base in Xichang in China's southwest.

Once analyzed, the data could help to improve understanding of the composition of space, and the dynamics of gas clouds or other galaxies.

The project stems from a partnership between the French and Chinese space agencies as well as other scientific and technical groups from both nations.

“It’s a great success. We’ve managed to work well with our Chinese colleagues,” Philippe Baptiste, CEO of France’s CNES space agency, told AFP after the launch.

Space cooperation at this level between the West and China is fairly uncommon, especially since the United States banned all collaboration between NASA and Beijing in 2011.

Race against time

“US concerns on technology transfer have inhibited US allies from collaborating with the Chinese very much, but it does happen occasionally,” said Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics in the United States.

In 2018, China and France jointly launched CFOSAT, an oceanographic satellite mainly used in marine meteorology.

Several European countries have also taken part in China’s Chang’e lunar exploration program.

So while SVOM is “by no means unique”, it remains “significant” in the context of space collaboration between China and the West, said McDowell.

Once in orbit 625 kilometers (388 miles) above the Earth, the satellite will send its data back to observatories.

The main challenge is that gamma-ray bursts are extremely brief, leaving scientists in a race against time to gather information.

Once it detects a burst, SVOM will send an alert to a team on duty around the clock.

Within five minutes, they will have to activate a network of ground-based telescopes that will point precisely at the burst's source to make more detailed observations.

© 2024 AFP

Citation:
China, France launch satellite to better understand the universe (2024, June 22)
retrieved 25 June 2024
from https://phys.org/news/2024-06-china-france-satellite-universe.html


New research shows why you don’t need to be perfect to get the job done

Constructing compact behavioral programs. (A) Top: The space of strategies for solving a task can be large, with many strategies that achieve good enough performance. Bottom: Studying relationships between strategies could provide insight into behavioral variability across animals and tasks. (B) General task setup: An animal makes inferences about hidden properties of the environment to guide actions. (C) Specific task setup: An animal forages from two ports whose reward probabilities change over time. (D) The optimal unconstrained strategy consists of an optimal policy coupled to a Bayesian ideal observer. (E) We formulate a constrained strategy as a small program that uses a limited number of internal states to select actions based on past actions and observations. (F) Each program generates sequences of actions depending on the outcomes of past actions. (G) The optimal unconstrained strategy (D) can be translated into a small program by discretizing the belief update implemented by the ideal Bayesian observer and coupled to the optimal behavioral policy. Top: Optimal belief update. Middle: Belief values can be partitioned into discrete states (filled circles) labeled by the action they specify (blue versus green). The belief update specifies transitions between states, depending on whether a reward was received (solid versus dashed arrows). Bottom: States and transitions represented as a Bayesian program. (H) Top: A 30-state program approximates the Bayesian update in (G) and has two directions of integration that can be interpreted as increasing confidence about either option. Bottom: The two-state Bayesian program, win-stay, lose-go (WSLG), continues taking the same action upon winning (i.e., receiving a reward) and switches actions upon losing (i.e., not receiving a reward). (I) Example behavior produced by the 30-state Bayesian program in (H). Credit: Science Advances (2024). DOI: 10.1126/sciadv.adj4064

When neuroscientists think about the strategy an animal might use to carry out a task—like finding food, hunting prey, or navigating a maze—they often propose a single model that lays out the best way for the animal to accomplish the job.

But in the real world, animals and humans may not use the optimal strategy, which can be resource-intensive. Instead, they use a strategy that's good enough to do the job but takes a lot less brain power.

In new research appearing in Science Advances, Janelia scientists set out to better understand the possible ways an animal could successfully solve a problem, beyond just the best strategy.

The work shows there is a huge number of ways an animal can accomplish a simple foraging task. It also lays out a theoretical framework for understanding these different strategies, how they relate to each other, and how they solve the same problem differently.

Some of these less-than-perfect options for accomplishing a task work nearly as well as the optimal strategy but with a lot less effort, the researchers found, freeing up animals to use precious resources to handle multiple tasks.

“As soon as you release yourself from being perfect, you would be surprised just how many ways there are to solve a problem,” says Tzuhsuan Ma, a postdoc in the Hermundstad Lab, who led the research.

The new framework could help researchers start examining these "good enough" strategies, including why different individuals might adopt different strategies, how these strategies might work together, and how generalizable the strategies are to other tasks. That could help explain how the brain enables behavior in the real world.

“Many of these strategies are ones we would have never dreamed up as possible ways of solving this task, but they do work well, so it’s entirely possible that animals could also be using them,” says Janelia Group Leader Ann Hermundstad. “They give us a new vocabulary for understanding behavior.”

Looking beyond perfection

The research began three years ago when Ma started wondering about the different strategies an animal could possibly use to accomplish a simple but common task: choosing between two options where the chance of being rewarded changes over time.

The researchers were interested in examining a group of strategies that fall between optimal and completely random solutions: “small programs” that are resource-limited but still get the job done. Each program specifies a different algorithm for guiding an animal’s actions based on past observations, allowing it to serve as a model of animal behavior.
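To make "small program" concrete, below is a minimal Python sketch of the simplest such program described in the figure caption, the two-state win-stay, lose-go (WSLG) strategy, playing the foraging task from the study: two ports whose reward probabilities change over time. The reward probabilities and switching schedule are illustrative choices, not the exact task parameters from the paper.

```python
import random

def wslg_agent(n_trials=1000, p_high=0.8, p_low=0.2, switch_every=100, seed=0):
    """Two-state 'win-stay, lose-go' program on a dynamic two-port foraging task.

    The agent keeps choosing the same port after a reward (win-stay) and
    switches ports after an unrewarded choice (lose-go). The reward
    probabilities of the two ports swap every `switch_every` trials.
    All task parameters here are illustrative, not the study's values.
    """
    rng = random.Random(seed)
    probs = [p_high, p_low]      # reward probability of port 0 and port 1
    action = 0                   # start at port 0
    rewards = 0
    for t in range(n_trials):
        if t > 0 and t % switch_every == 0:
            probs.reverse()      # environment change: the better port swaps
        rewarded = rng.random() < probs[action]
        rewards += rewarded
        if not rewarded:         # lose-go: switch ports; otherwise win-stay
            action = 1 - action
    return rewards / n_trials

print(f"WSLG reward rate: {wslg_agent():.3f}")
# Typically well above the chance rate of 0.5 * (p_high + p_low), despite
# the program using only two internal states.
```

The 30-state Bayesian program in the figure works the same way in spirit, just with more internal states tracking graded confidence rather than a single "stay or go" bit.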

As it turns out, there are many such programs—about a quarter of a million. To make sense of these strategies, the researchers first looked at a handful of the top-performing ones. Surprisingly, they found they were essentially doing the same thing as the optimal strategy, despite using fewer resources.

“We were a little disappointed,” Ma says. “We spent all this time searching for these small programs, and they all follow the same computation that the field already knew how to mathematically derive without all this effort.”

But the researchers were motivated to keep looking—they had a strong intuition that there had to be programs out there that were good but different from the optimal strategy. Once they looked beyond the very best programs, they found what they were looking for: about 4,000 programs that fall into this “good enough” category. And more importantly, more than 90% of them did something new.

They could have stopped there, but a question from a fellow Janelian spurred them on: How could they figure out which strategy an animal was using?

The question prompted the team to dive deep into the behavior of individual programs and develop a systematic approach to thinking about the entire collection of strategies. They first developed a mathematical way to describe the programs’ relationships to each other through a network that connected the different programs. Next, they looked at the behavior described by the strategies, devising an algorithm to reveal how one of these “good enough” programs could evolve from another.

They found that small changes to the optimal program can lead to big changes in behavior while still preserving performance. If some of these new behaviors are also useful in other tasks, it suggests that the same program could be good enough for solving a range of different problems.

“If you are thinking about an animal not being a specialist who is optimized to solve just one problem, but rather a generalist who solves many problems, this really is a new way to study that,” Ma says.

The new work provides a framework for researchers to start thinking beyond single, optimal programs for animal behavior. Now, the team is focused on examining how generalizable the small programs are to other tasks, and designing new experiments to determine which program an animal might be using to carry out a task in real time. They are also working with other researchers at Janelia to test their theoretical framework.

“Ultimately, getting a strong grasp on an animal’s behavior is an essential prerequisite to understanding how the brain solves different types of problems, including some that our best artificial systems only solve inefficiently, if at all,” Hermundstad says. “The key challenge is that animals might be using very different strategies than we might initially assume, and this work is helping us uncover that space of possibilities.”

More information:
Tzuhsuan Ma et al, A vast space of compact strategies for effective decisions, Science Advances (2024). DOI: 10.1126/sciadv.adj4064

Citation:
New research shows why you don’t need to be perfect to get the job done (2024, June 24)
retrieved 25 June 2024
from https://phys.org/news/2024-06-dont-job.html


AI ‘gold rush’ for chatbot training data could run out of human-written text

Artificial intelligence systems like ChatGPT are gobbling ever-larger collections of human writings they need to get smarter. Credit: AP Digital Embed

Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter—the tens of trillions of words people have written and shared online.

A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade—sometime between 2026 and 2032.

Comparing it to a “literal gold rush” that depletes finite natural resources, Tamay Besiroglu, an author of the study, said the AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing.

In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models—for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets.

In the longer term, there won't be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private, such as emails or text messages, or to rely on less reliable "synthetic data" spit out by the chatbots themselves.

“There is a serious bottleneck here,” Besiroglu said. “If you start hitting those constraints about how much data you have, then you can’t really scale up your models efficiently anymore. And scaling up models has been probably the most important way of expanding their capabilities and improving the quality of their output.”

The researchers first made their projections two years ago—shortly before ChatGPT’s debut—in a working paper that forecast a more imminent 2026 cutoff of high-quality text data. Much has changed since then, including new techniques that enabled AI researchers to make better use of the data they already have and sometimes “overtrain” on the same sources multiple times.

But there are limits, and after further research, Epoch now foresees running out of public text data sometime in the next two to eight years.

The team’s latest study is peer-reviewed and due to be presented at this summer’s International Conference on Machine Learning in Vienna, Austria. Epoch is a nonprofit institute hosted by San Francisco-based Rethink Priorities and funded by proponents of effective altruism—a philanthropic movement that has poured money into mitigating AI’s worst-case risks.

Besiroglu said AI researchers realized more than a decade ago that aggressively expanding two key ingredients—computing power and vast stores of internet data—could significantly improve the performance of AI systems.

The amount of text data fed into AI language models has been growing about 2.5 times per year, while computing has grown about 4 times per year, according to the Epoch study. Facebook parent company Meta Platforms recently claimed the largest version of their upcoming Llama 3 model—which has not yet been released—has been trained on up to 15 trillion tokens, each of which can represent a piece of a word.
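To see how those growth rates translate into a timeline, here is a hedged back-of-the-envelope projection in Python. The 2.5x-per-year data growth and the 15-trillion-token Llama 3 figure come from the article; the size of the total public-text stock is an assumed placeholder, not Epoch AI's own estimate.

```python
import math

# Rough projection of when training-data demand could exhaust public text.
# The growth rate (2.5x/year) and the 15-trillion-token starting point are
# quoted in the article; the total stock of usable public text below is an
# assumed, illustrative figure, not a number from the Epoch AI study.

tokens_used_now = 15e12          # tokens in today's largest training runs (article figure)
annual_growth = 2.5              # training data grows ~2.5x per year (article figure)
assumed_public_stock = 500e12    # hypothetical total stock of usable public text, in tokens

years_until_exhausted = math.log(assumed_public_stock / tokens_used_now) / math.log(annual_growth)
print(f"Years until demand reaches the assumed stock: {years_until_exhausted:.1f}")
# With these illustrative numbers the crossover lands within a handful of years,
# the same order of magnitude as the study's 2026-2032 window.
```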

Traffic on Interstate 35 passes a Microsoft data center on Sept. 5, 2023, in West Des Moines, Iowa. Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter — the tens of trillions of words that people have written and shared online. Credit: AP Photo/Charlie Neibergall, File

But how much it’s worth worrying about the data bottleneck is debatable.

“I think it’s important to keep in mind that we don’t necessarily need to train larger and larger models,” said Nicolas Papernot, an assistant professor of computer engineering at the University of Toronto and researcher at the nonprofit Vector Institute for Artificial Intelligence.

Papernot, who was not involved in the Epoch study, said building more skilled AI systems can also come from training models that are more specialized for specific tasks. But he has concerns about training generative AI systems on the same outputs they’re producing, leading to degraded performance known as “model collapse.”

Training on AI-generated data is "like what happens when you photocopy a piece of paper and then you photocopy the photocopy. You lose some of the information," Papernot said. Not only that, but Papernot's research has also found it can further encode the mistakes, bias and unfairness that are already baked into the information ecosystem.
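The photocopy analogy can be illustrated with a toy numerical experiment: repeatedly fit a simple model to samples drawn from the previous generation's model and watch information drain away. The Gaussian setup below is a standard toy illustration of this effect, not the procedure used in Papernot's research.

```python
import random, statistics

# Toy illustration of "model collapse": each generation fits a Gaussian to a
# finite sample drawn from the previous generation's fitted Gaussian. The
# fitted spread tends to drift over generations and, run long enough, loses
# the variability present in the original data. A textbook-style toy only,
# not the experiment from the research described above.

rng = random.Random(0)
mu, sigma = 0.0, 1.0            # the "real data" distribution
sample_size = 200

for generation in range(1, 11):
    sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
    mu, sigma = statistics.fmean(sample), statistics.stdev(sample)
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# The std column typically wanders away from 1.0 and, over many more
# generations, shrinks toward zero -- the "photocopy of a photocopy" effect.
```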

If real human-crafted sentences remain a critical AI data source, those who are stewards of the most sought-after troves—websites like Reddit and Wikipedia, as well as news and book publishers—have been forced to think hard about how they’re being used.

“Maybe you don’t lop off the tops of every mountain,” jokes Selena Deckelmann, chief product and technology officer at the Wikimedia Foundation, which runs Wikipedia. “It’s an interesting problem right now that we’re having natural resource conversations about human-created data. I shouldn’t laugh about it, but I do find it kind of amazing.”

While some have sought to close off their data from AI training—often after it’s already been taken without compensation—Wikipedia has placed few restrictions on how AI companies use its volunteer-written entries. Still, Deckelmann said she hopes there continue to be incentives for people to keep contributing, especially as a flood of cheap and automatically generated “garbage content” starts polluting the internet.

AI companies should be “concerned about how human-generated content continues to exist and continues to be accessible,” she said.

From the perspective of AI developers, Epoch’s study says paying millions of humans to generate the text that AI models will need “is unlikely to be an economical way” to drive better technical performance.

As OpenAI begins work on training the next generation of its GPT large language models, CEO Sam Altman told the audience at a United Nations event last month that the company has already experimented with “generating lots of synthetic data” for training.

“I think what you need is high-quality data. There is low-quality synthetic data. There’s low-quality human data,” Altman said. But he also expressed reservations about relying too heavily on synthetic data over other technical methods to improve AI models.

“There’d be something very strange if the best way to train a model was to just generate, like, a quadrillion tokens of synthetic data and feed that back in,” Altman said. “Somehow that seems inefficient.”

More information:
Pablo Villalobos et al, Will we run out of data? Limits of LLM scaling based on human-generated data, arXiv (2022). DOI: 10.48550/arxiv.2211.04325

Journal information:
arXiv


© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
AI ‘gold rush’ for chatbot training data could run out of human-written text (2024, June 6)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-ai-gold-chatbot-human-written.html


Our smart devices will soon be smarter

Credit: Pixabay/CC0 Public Domain

Our smart devices take voice commands from us, check our heartbeats, track our sleep, translate text, send us reminders, capture photos and movies, and let us talk to family and friends continents away.

Now imagine turbocharging those capabilities. Holding in-depth, natural language exchanges on academic or personal queries; running our vital signs through a global database to check on imminent health issues; tapping massive databases to provide comprehensive real-time translation among two or more parties speaking different languages; and conversing with GPS software providing details on the best burgers, movies, hotels or people-watching spots trending along your route.

Tapping into the seductive power of large language models and natural language processing, we've witnessed tremendous progress in communication between us and the technology we increasingly rely on in our daily lives.

But there’s been a stumbling block when it comes to AI and our portable devices. Researchers at Apple say they are ready to do something about it.

The issue is memory. Large language models need lots of it. With models demanding storage of potentially hundreds of billions of parameters, commonly used smartphones such as Apple’s iPhone 15 with a scant 8GB of memory will fall far short of the task.

In a paper uploaded to the pre-print server arXiv on Dec. 12, Apple announced it had developed a method that orchestrates data transfers between flash memory and DRAM, allowing a smart device to run a powerful AI system.

The researchers say their process can run AI models up to twice the size of a device's DRAM capacity and speed up CPU inference by up to 500%. GPU inference, they say, can be sped up by as much as 25 times compared with current approaches.

“Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks,” the researchers said in their paper titled, “LLM in a flash: Efficient Large Language Model Inference with Limited Memory.”

The two techniques they used were:

  1. Windowing, which slashes the amount of data that needs to be exchanged between flash memory and RAM. This is accomplished by reusing results from recent calculations, minimizing I/O requests and saving energy and time.
  2. Row-column bundling, which achieves greater efficiency by reading larger, more contiguous chunks of data at a time from flash memory.

The two processes, say the researchers, “collectively contribute to a significant reduction in the data load and an increase in the efficiency of memory usage.”
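The paper's actual implementation exploits sparsity inside the model's feed-forward layers, but the flavor of the windowing idea can be sketched in a few lines of Python: keep in fast memory only the parameter rows needed by a sliding window of recent tokens, and fetch from slow flash storage only rows that newly become needed. Everything below (names, sizes, the dict standing in for flash) is an illustrative assumption, not Apple's code.

```python
# Toy sketch of "windowing": cache in fast memory (a dict standing in for DRAM)
# only the weight rows needed by the last `window` tokens, and load from slow
# storage (a dict standing in for flash) only rows not already cached.
# Purely illustrative -- not Apple's implementation or data layout.

from collections import deque

FLASH = {row_id: f"weights-for-row-{row_id}" for row_id in range(1000)}  # slow storage

class WindowedCache:
    def __init__(self, window=4):
        self.window = window
        self.recent = deque()          # per-token sets of active rows, newest last
        self.dram = {}                 # fast memory: row_id -> weights
        self.flash_reads = 0

    def step(self, active_rows):
        """Process one token that needs `active_rows` of the weight matrix."""
        for row in active_rows:
            if row not in self.dram:               # fetch only rows not already resident
                self.dram[row] = FLASH[row]
                self.flash_reads += 1
        self.recent.append(set(active_rows))
        if len(self.recent) > self.window:         # evict rows used only by tokens
            expired = self.recent.popleft()        # that fell out of the window
            still_needed = set().union(*self.recent)
            for row in expired - still_needed:
                del self.dram[row]
        return [self.dram[row] for row in active_rows]

cache = WindowedCache(window=4)
for token_rows in [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {2, 5, 6}, {5, 6, 7}]:
    cache.step(token_rows)
print("flash reads:", cache.flash_reads, "of", 15, "row accesses")
```

Because consecutive tokens tend to reuse many of the same rows, most accesses are served from the fast cache and only a fraction require a trip to flash, which is the reduction in data transfer the researchers describe.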

They added, “This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility.”

In another recent breakthrough, Apple announced that it had designed a program called HUGS that can create animated avatars from just a few seconds' worth of video captured from a single lens. Current avatar creation programs require multiple camera views. The report, "HUGS: Human Gaussian Splats," was uploaded to arXiv Nov. 29.

Their program can create realistic dancing avatars in as little as 30 minutes, far shorter than the two days required for current popular approaches, according to Apple.

More information:
Keivan Alizadeh et al, LLM in a flash: Efficient Large Language Model Inference with Limited Memory, arXiv (2023). DOI: 10.48550/arxiv.2312.11514

Journal information:
arXiv


© 2023 Science X Network

Citation:
Apple flash: Our smart devices will soon be smarter (2023, December 28)
retrieved 25 June 2024
from https://techxplore.com/news/2023-12-apple-smart-devices-smarter.html
