Family conditions—specifically, how closely an individual's social status and background mirror their parents'—may play a bigger role than gender inequality in determining how easily that individual can move into a wealthier socioeconomic class, according to a study of 153 countries published June 20, 2024 in the open-access journal PLOS ONE by Khanh Duong from Maynooth University, Ireland.
As global inequality increases, researchers have found that countries with higher levels of income inequality tend to experience lower rates of class mobility (in other words, individuals in a lower socioeconomic class find it more difficult to move into a wealthier class).
In this study, Duong analyzed how education, gender inequality, and family conditions (specifically, how similar children are to their parents, also known in this context as parental dependency) interact and affect class mobility. He used data from the Global Database on Intergenerational Mobility for 153 countries worldwide (of which 115 are classified as “developing economies”), further split into generational cohorts for each decade from the 1940s to the 1980s, to build his model.
Duong’s preliminary analyses showed a positive relationship between education expansion and mobility, and a negative relationship between education inequality and mobility. Parental dependency showed only a weak positive correlation with mobility. However, after applying estimation techniques to address confounding between parental dependency and other factors, the final model showed that parental dependency had the largest negative effect on upward social mobility (an effect size of 0.1).
Though increases in education promoted social mobility, the model showed this effect was weak, and education expansion was potentially ineffective where parental dependency was high. His model also showed that the effect of gender inequality on mobility (as seen in the outcomes of families with daughters versus sons) was significantly smaller (effect size of 0.005) than the parental dependency effect, although still present.
Duong suggests that policymakers promoting social mobility should focus on shifting traditions such as “like father, like son.”
He adds, “The study shows that while gender inequality in intergenerational mobility persists, it has significantly decreased across generations and is less important than parental influence. Thus, reassessing the roles of parental influence and gender bias is necessary, as the former is currently underestimated and the latter overemphasized.”
Citation:
Family conditions may have more of an impact on upward social mobility than gender inequality (2024, June 20)
retrieved 25 June 2024
from https://phys.org/news/2024-06-family-conditions-impact-upward-social.html
The space environment is harsh and full of extreme radiation. Scientists designing spacecraft and satellites need materials that can withstand these conditions.
Metal-halide perovskites are a class of materials that share the crystal structure of the mineral perovskite, which was discovered in 1839 and is found abundantly in Earth’s crust. They absorb sunlight and efficiently convert it into electricity, making them a potentially good fit for space-based solar panels that could power satellites or future space habitats.
Researchers make perovskites in the form of inks, then coat the inks onto glass plates or plastic, creating thin, filmlike devices that are lightweight and flexible.
Surprisingly, these thin-film solar cells perform as well as conventional silicon solar cells in laboratory demonstrations, even though they are almost 100 times thinner than traditional solar cells.
But these films can degrade if they’re exposed to moisture or oxygen. Researchers and industry are currently working on addressing these stability concerns for terrestrial deployment.
In our experiments, my team exposed perovskite solar cells to both low-energy protons, which damaged the material, and high-energy protons, mimicking the proton radiation found in space. The high-energy protons healed the damage caused by the low-energy protons, allowing the device to recover and continue doing its job. The conventional semiconductors used for space electronics do not show this healing.
My team was surprised by this finding. How can a material that degrades when exposed to oxygen and moisture not only resist the harsh radiation of space but also self-heal in an environment that destroys conventional silicon semiconductors?
In our paper, we started to unravel this mystery.
Why it matters
Scientists predict that in the next 10 years, satellite launches into near-Earth orbit will increase exponentially, and space agencies such as NASA aim to establish bases on the moon.
Materials that can tolerate extreme radiation and self-heal would change the game.
Researchers estimate that deploying just a few pounds of perovskite materials into space could generate up to 10 million watts (10 megawatts) of power. It currently costs about US$4,000 per kilogram ($1,818 per pound) to launch materials into space, so efficient, lightweight materials are important.
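As a back-of-the-envelope illustration of that economics (the payload mass below is an assumption; the cost and power figures are the article's round numbers):

```python
# Back-of-the-envelope launch economics for perovskite solar films.
# Cost and power figures are from the article; the 5 lb payload is
# an illustrative assumption standing in for "a few pounds."

LAUNCH_COST_PER_KG = 4_000   # US$ per kilogram (article's estimate)
LB_PER_KG = 2.20462          # pounds per kilogram

payload_lb = 5               # assumed payload of perovskite material
payload_kg = payload_lb / LB_PER_KG

power_w = 10_000_000         # up to 10 MW generated (article's estimate)

launch_cost = payload_kg * LAUNCH_COST_PER_KG
print(f"Launch cost for {payload_lb} lb: ${launch_cost:,.0f}")
print(f"Implied launch cost per watt: ${launch_cost / power_w:.6f}")
# ~$9,000 to launch, well under a cent of launch cost per watt of
# capacity -- which is why low mass per watt matters for space solar.
```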
What still isn’t known
Our findings shed light on a remarkable aspect of perovskites—their tolerance to damage and defects. Perovskite crystals are a type of soft material, which means their atoms can shift and vibrate in patterns that scientists call vibrational modes.
Atoms in perovskites are normally arranged in a lattice formation. But radiation can knock the atoms out of position, damaging the material. The vibrations might help reposition the atoms back into place, but we’re still not sure exactly how this process works.
What’s next?
Our findings suggest that soft materials might be uniquely helpful in extreme environments, including space.
But radiation isn’t the only stress that materials have to weather in space. Scientists don’t yet know how perovskites will fare when exposed to vacuum conditions and extreme temperature variations, along with radiation, all at once. Temperature could play a role in the healing behavior my team saw, but we’ll need to conduct more research to determine how.
These results tell us that soft materials could help scientists develop technology that works well in extreme environments. Future research could dive deeper into how the vibrations in these materials relate to any self-healing properties.
More information:
Ahmad R. Kirmani et al, Unraveling radiation damage and healing mechanisms in halide perovskites using energy-tuned dual irradiation dosing, Nature Communications (2024). DOI: 10.1038/s41467-024-44876-1
Citation:
Space radiation can damage satellites—next-generation material could self-heal when exposed to cosmic rays (2024, June 24)
retrieved 25 June 2024
from https://phys.org/news/2024-06-space-satellites-generation-material-exposed.html
A French-Chinese satellite blasted off Saturday on a hunt for the mightiest explosions in the universe, in a notable example of cooperation between a Western power and the Asian giant.
Developed by engineers from both countries, the Space Variable Objects Monitor (SVOM) is carrying four instruments—two French, two Chinese—that will seek out gamma-ray bursts, the light from which has traveled billions of light years to reach Earth.
The 930-kilogram (2,050-pound) satellite “successfully” took off around 3:00 pm (0700 GMT) aboard a Chinese Long March 2C rocket from a space base in Xichang, in the southwestern province of Sichuan, the China National Space Administration said.
Gamma-ray bursts generally occur after the explosion of huge stars—those more than 20 times as massive as the sun—or the merger of compact stars.
The extremely bright cosmic beams can give off a blast of energy equivalent to more than a billion billion suns.
Observing them is like “looking back in time, as the light from these objects takes a long time to reach us”, Ore Gottlieb, an astrophysicist at the Flatiron Institute’s Center for Computational Astrophysics in New York, told AFP.
‘Several mysteries’
The rays carry traces of the gas clouds and galaxies they pass through on their journey through space—valuable data for better understanding the history and evolution of the universe.
“SVOM has the potential to unravel several mysteries in the field of (gamma-ray bursts), including detecting the most distant GRBs in the universe, which correspond to the earliest GRBs,” Gottlieb said.
The most distant bursts identified to date were produced just 630 million years after the Big Bang—when the universe was in its infancy.
“We are… interested in gamma-ray bursts for their own sake because they are very extreme cosmic explosions which allow us to better understand the death of certain stars,” said Frederic Daigne, an astrophysicist at the Paris Institute of Astrophysics.
“All of this data makes it possible to test the laws of physics with phenomena that are impossible to reproduce in the laboratory on Earth.”
Once analyzed, the data could help to improve understanding of the composition of space, and the dynamics of gas clouds or other galaxies.
The project stems from a partnership between the French and Chinese space agencies as well as other scientific and technical groups from both nations.
“It’s a great success. We’ve managed to work well with our Chinese colleagues,” Philippe Baptiste, CEO of France’s CNES space agency, told AFP after the launch.
Space cooperation at this level between the West and China is fairly uncommon, especially since the United States banned all collaboration between NASA and Beijing in 2011.
Race against time
“US concerns on technology transfer have inhibited US allies from collaborating with the Chinese very much, but it does happen occasionally,” said Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics in the United States.
In 2018, China and France jointly launched CFOSAT, an oceanographic satellite mainly used in marine meteorology.
Several European countries have also taken part in China’s Chang’e lunar exploration program.
So while SVOM is “by no means unique”, it remains “significant” in the context of space collaboration between China and the West, said McDowell.
Once in orbit 625 kilometers (388 miles) above the Earth, the satellite will send its data back to observatories.
The main challenge is that gamma-ray bursts are extremely brief, leaving scientists in a race against time to gather information.
Once it detects a burst, SVOM will send an alert to a team on duty around the clock.
Within five minutes, they will have to rev up a network of ground-based telescopes and point them precisely at the burst’s source to make more detailed observations.
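To make that time pressure concrete, here is a minimal sketch of a deadline-driven alert handler (all names and structures here are hypothetical illustrations; the article does not describe SVOM's actual software):

```python
# Illustrative sketch of a deadline-driven burst-alert handler.
# All names are hypothetical; this is not SVOM's real alert system.
import time
from dataclasses import dataclass

FOLLOW_UP_DEADLINE_S = 5 * 60  # five minutes to repoint ground telescopes

@dataclass
class BurstAlert:
    burst_id: str
    ra_deg: float       # sky coordinates of the burst's source
    dec_deg: float
    detected_at: float  # Unix timestamp of detection

def handle_alert(alert: BurstAlert, telescopes: list[str]) -> None:
    """Fan the alert out to ground telescopes and check the deadline."""
    for scope in telescopes:
        # In reality this would be a network message to each observatory.
        print(f"{scope}: slew to RA={alert.ra_deg:.3f}, Dec={alert.dec_deg:.3f}")
    elapsed = time.time() - alert.detected_at
    if elapsed > FOLLOW_UP_DEADLINE_S:
        print(f"WARNING: {elapsed:.0f}s elapsed; the afterglow may be fading")

handle_alert(BurstAlert("GRB-demo", 150.1, -32.7, time.time()),
             ["telescope_A", "telescope_B"])
```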
Citation:
China, France launch satellite to better understand the universe (2024, June 22)
retrieved 25 June 2024
from https://phys.org/news/2024-06-china-france-satellite-universe.html
When neuroscientists think about the strategy an animal might use to carry out a task—like finding food, hunting prey, or navigating a maze—they often propose a single model that lays out the best way for the animal to accomplish the job.
But in the real world, animals—and humans—may not use the optimal strategy, which can be resource-intensive. Instead, they may use a strategy that’s good enough to do the job but takes a lot less brain power.
In new research appearing in Science Advances, Janelia scientists set out to better understand the possible ways an animal could successfully solve a problem, beyond just the best strategy.
The work shows there is a huge number of ways an animal can accomplish a simple foraging task. It also lays out a theoretical framework for understanding these different strategies, how they relate to each other, and how they solve the same problem differently.
Some of these less-than-perfect options for accomplishing a task work nearly as well as the optimal strategy but with a lot less effort, the researchers found, freeing up animals to use precious resources to handle multiple tasks.
“As soon as you release yourself from being perfect, you would be surprised just how many ways there are to solve a problem,” says Tzuhsuan Ma, a postdoc in the Hermundstad Lab, who led the research.
The new framework could help researchers start examining these “good enough” strategies, including why different individuals might adopt different strategies, how these strategies might work together, and how generalizable they are to other tasks. That could help explain how the brain enables behavior in the real world.
“Many of these strategies are ones we would have never dreamed up as possible ways of solving this task, but they do work well, so it’s entirely possible that animals could also be using them,” says Janelia Group Leader Ann Hermundstad. “They give us a new vocabulary for understanding behavior.”
Looking beyond perfection
The research began three years ago when Ma started wondering about the different strategies an animal could possibly use to accomplish a simple but common task: choosing between two options where the chance of being rewarded changes over time.
The researchers were interested in examining a group of strategies that fall between optimal and completely random solutions: “small programs” that are resource-limited but still get the job done. Each program specifies a different algorithm for guiding an animal’s actions based on past observations, allowing it to serve as a model of animal behavior.
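For a flavor of what such a small program might look like, here is a minimal sketch (an illustrative strategy in the same resource-limited spirit, not one of the paper's actual programs): a win-stay/lose-shift rule that needs only one remembered choice to handle a two-option task whose reward probabilities change over time.

```python
# A minimal "small program" for a two-option foraging task:
# win-stay / lose-shift, which remembers only its last choice.
# Illustrative only -- not one of the ~250,000 programs from the paper.
import random

def run_win_stay_lose_shift(steps: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    p = [0.8, 0.2]          # reward probabilities for the two options
    choice = 0              # the program's single piece of state
    rewards = 0
    for t in range(steps):
        if t % 500 == 0 and t > 0:
            p.reverse()     # the environment changes over time
        rewarded = rng.random() < p[choice]
        rewards += rewarded
        if not rewarded:    # lose-shift: switch options after a failure
            choice = 1 - choice
        # win-stay: keep the same option after a success
    return rewards / steps

# Roughly 0.68 reward rate here, versus 0.5 for random choice --
# good enough, despite using almost no memory or computation.
print(f"Reward rate: {run_win_stay_lose_shift():.3f}")
```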
As it turns out, there are many such programs—about a quarter of a million. To make sense of these strategies, the researchers first looked at a handful of the top-performing ones. Surprisingly, they found they were essentially doing the same thing as the optimal strategy, despite using fewer resources.
“We were a little disappointed,” Ma says. “We spent all this time searching for these small programs, and they all follow the same computation that the field already knew how to mathematically derive without all this effort.”
But the researchers were motivated to keep looking—they had a strong intuition that there had to be programs out there that were good but different from the optimal strategy. Once they looked beyond the very best programs, they found what they were looking for: about 4,000 programs that fall into this “good enough” category. And more importantly, more than 90% of them did something new.
They could have stopped there, but a question from a fellow Janelian spurred them on: How could they figure out which strategy an animal was using?
The question prompted the team to dive deep into the behavior of individual programs and develop a systematic approach to thinking about the entire collection of strategies. They first developed a mathematical way to describe the programs’ relationships to each other through a network that connected the different programs. Next, they looked at the behavior described by the strategies, devising an algorithm to reveal how one of these “good enough” programs could evolve from another.
They found that small changes to the optimal program can lead to big changes in behavior while still preserving performance. If some of these new behaviors are also useful in other tasks, it suggests that the same program could be good enough for solving a range of different problems.
“If you are thinking about an animal not being a specialist who is optimized to solve just one problem, but rather a generalist who solves many problems, this really is a new way to study that,” Ma says.
The new work provides a framework for researchers to start thinking beyond single, optimal programs for animal behavior. Now, the team is focused on examining how generalizable the small programs are to other tasks, and designing new experiments to determine which program an animal might be using to carry out a task in real time. They are also working with other researchers at Janelia to test their theoretical framework.
“Ultimately, getting a strong grasp on an animal’s behavior is an essential prerequisite to understanding how the brain solves different types of problems, including some that our best artificial systems only solve inefficiently, if at all,” Hermundstad says. “The key challenge is that animals might be using very different strategies than we might initially assume, and this work is helping us uncover that space of possibilities.”
More information:
Tzuhsuan Ma et al, A vast space of compact strategies for effective decisions, Science Advances (2024). DOI: 10.1126/sciadv.adj4064
Citation:
New research shows why you don’t need to be perfect to get the job done (2024, June 24)
retrieved 25 June 2024
from https://phys.org/news/2024-06-dont-job.html
Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter—the tens of trillions of words people have written and shared online.
A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade—sometime between 2026 and 2032.
Comparing it to a “literal gold rush” that depletes finite natural resources, Tamay Besiroglu, an author of the study, said the AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing.
In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models—for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets.
In the longer term, there won’t be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private—such as emails or text messages—or to rely on less-reliable “synthetic data” spit out by the chatbots themselves.
“There is a serious bottleneck here,” Besiroglu said. “If you start hitting those constraints about how much data you have, then you can’t really scale up your models efficiently anymore. And scaling up models has been probably the most important way of expanding their capabilities and improving the quality of their output.”
The researchers first made their projections two years ago—shortly before ChatGPT’s debut—in a working paper that forecast a more imminent 2026 cutoff of high-quality text data. Much has changed since then, including new techniques that enabled AI researchers to make better use of the data they already have and sometimes “overtrain” on the same sources multiple times.
But there are limits, and after further research, Epoch now foresees running out of public text data sometime in the next two to eight years.
The team’s latest study is peer-reviewed and due to be presented at this summer’s International Conference on Machine Learning in Vienna, Austria. Epoch is a nonprofit institute hosted by San Francisco-based Rethink Priorities and funded by proponents of effective altruism—a philanthropic movement that has poured money into mitigating AI’s worst-case risks.
Besiroglu said AI researchers realized more than a decade ago that aggressively expanding two key ingredients—computing power and vast stores of internet data—could significantly improve the performance of AI systems.
The amount of text data fed into AI language models has been growing about 2.5 times per year, while computing power has grown about 4 times per year, according to the Epoch study. Facebook parent company Meta Platforms recently claimed the largest version of its upcoming Llama 3 model—which has not yet been released—has been trained on up to 15 trillion tokens, each of which can represent a piece of a word.
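The arithmetic behind such a projection is easy to sketch. Here is a toy calculation (the stock figure is an assumed illustration, not Epoch's estimate; the 2.5× growth rate and 15-trillion-token scale are from the article):

```python
# Toy projection of when training-data demand overtakes the public text stock.
# The stock size is an illustrative assumption; the 2.5x annual growth rate
# and 15-trillion-token dataset scale are the figures reported above.
import math

dataset_tokens = 15e12    # ~15 trillion tokens (Llama 3 scale, per the article)
stock_tokens = 300e12     # assumed effective stock of public text (illustration)
growth_per_year = 2.5     # dataset growth rate from the Epoch study

years = math.log(stock_tokens / dataset_tokens) / math.log(growth_per_year)
print(f"Stock exhausted in ~{years:.1f} years at 2.5x annual growth")
# ~3.3 years under these assumptions -- from 2024, comfortably within
# Epoch's projected 2026-2032 window.
```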
But how much it’s worth worrying about the data bottleneck is debatable.
“I think it’s important to keep in mind that we don’t necessarily need to train larger and larger models,” said Nicolas Papernot, an assistant professor of computer engineering at the University of Toronto and researcher at the nonprofit Vector Institute for Artificial Intelligence.
Papernot, who was not involved in the Epoch study, said building more skilled AI systems can also come from training models that are more specialized for specific tasks. But he has concerns about training generative AI systems on the same outputs they’re producing, leading to degraded performance known as “model collapse.”
Training on AI-generated data is “like what happens when you photocopy a piece of paper and then you photocopy the photocopy. You lose some of the information,” Papernot said. Not only that, but Papernot’s research has also found it can further encode the mistakes, bias and unfairness that are already baked into the information ecosystem.
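Papernot's photocopy analogy can be made concrete with a toy simulation (an illustrative sketch, not the experiments behind his research): fit a simple model to data, sample from the fit, refit to those samples, and repeat.

```python
# Toy illustration of "model collapse": repeatedly fitting a simple model
# (here, a Gaussian) to samples drawn from the previous generation's fit.
# An illustrative sketch, not the experiments described in the article.
import random
import statistics

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(20)]  # original "human" data: N(0, 1)

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # Each generation trains only on the previous generation's outputs,
    # so estimation errors compound instead of averaging out.
    data = [rng.gauss(mu, sigma) for _ in range(20)]

# mu and sigma drift away from the true values 0 and 1 -- information about
# the original data is lost, like photocopying a photocopy.
```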
If real human-crafted sentences remain a critical AI data source, those who are stewards of the most sought-after troves—websites like Reddit and Wikipedia, as well as news and book publishers—have been forced to think hard about how they’re being used.
“Maybe you don’t lop off the tops of every mountain,” jokes Selena Deckelmann, chief product and technology officer at the Wikimedia Foundation, which runs Wikipedia. “It’s an interesting problem right now that we’re having natural resource conversations about human-created data. I shouldn’t laugh about it, but I do find it kind of amazing.”
While some have sought to close off their data from AI training—often after it’s already been taken without compensation—Wikipedia has placed few restrictions on how AI companies use its volunteer-written entries. Still, Deckelmann said she hopes there continue to be incentives for people to keep contributing, especially as a flood of cheap and automatically generated “garbage content” starts polluting the internet.
AI companies should be “concerned about how human-generated content continues to exist and continues to be accessible,” she said.
From the perspective of AI developers, Epoch’s study says paying millions of humans to generate the text that AI models will need “is unlikely to be an economical way” to drive better technical performance.
As OpenAI begins work on training the next generation of its GPT large language models, CEO Sam Altman told the audience at a United Nations event last month that the company has already experimented with “generating lots of synthetic data” for training.
“I think what you need is high-quality data. There is low-quality synthetic data. There’s low-quality human data,” Altman said. But he also expressed reservations about relying too heavily on synthetic data over other technical methods to improve AI models.
“There’d be something very strange if the best way to train a model was to just generate, like, a quadrillion tokens of synthetic data and feed that back in,” Altman said. “Somehow that seems inefficient.”
More information:
Pablo Villalobos et al, Will we run out of data? Limits of LLM scaling based on human-generated data, arXiv (2022). DOI: 10.48550/arxiv.2211.04325
Citation:
AI ‘gold rush’ for chatbot training data could run out of human-written text (2024, June 6)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-ai-gold-chatbot-human-written.html