
Why the law needs to protect animals from AI



Credit: Unsplash/CC0 Public Domain

The rise of artificial intelligence (AI) has triggered concern about potentially detrimental effects on humans. However, the technology also has the potential to harm animals.

An important policy reform now underway in Australia offers an opportunity to address this. The federal government has committed A$5 million to renewing the lapsed Australian Animal Welfare Strategy. Consultation has begun, and the final strategy is expected in 2027.

While AI is not an explicit focus of the review, it should be.

Australians care about animals. The strategy could help ensure decision-makers protect animals from AI’s harm in our homes, on farms and in the wild.

Will AI harms to animals go unchecked?

Computers have advanced to the point where they can perform some complex tasks as well as, or better than, humans. In other words, they have developed a degree of “artificial intelligence”.

The technology is exciting but also risky.

Warnings about the risks to humans include everything from privacy concerns to the collapse of human civilization.

Policy-makers in the European Union, the United States and Australia are scrambling to address these issues and ensure AI is safe and used responsibly. But the focus of these policies is to protect humans.

Now, Australia has a chance to protect animals from AI.

Australia’s previous Animal Welfare Strategy expired in 2014. It’s now being revived, and aims to provide a national approach to animal welfare.

So far, documents released as part of the review suggest AI is not being considered under the strategy. That is a serious omission, for reasons we outline below.

Powerful and pervasive technology in use

Many AI applications benefit animals, such as in veterinary medicine. For example, AI may soon help your vet read X-rays of your animal companion.

AI is being developed to detect pain in cats and dogs. This might help if the technology is accurate, but could cause harm if it’s inaccurate by either over-reporting pain or failing to detect discomfort.

AI may also allow humans to decipher animal communication and better understand animals’ point of view, such as interpreting whale song.

It has also been used to discover which trees and artificial structures are best for birds.

But when it comes to animals, research suggests AI may also be used to harm them.

For example, it may be used by poachers and illegal wildlife traders to track and kill or capture endangered species. And AI-powered algorithms used by social media platforms can connect crime gangs to customers, perpetuating the illegal wildlife trade.

AI is known to produce racial, gender and other biases in relation to humans. It can also produce biased information and opinions about animals.

For example, AI chatbots may perpetuate negative attitudes about animals in their training data—perhaps suggesting their purpose is to be hunted or eaten.

There are plans to use AI to distinguish cats from native species and then kill the cats. Yet, AI image recognition tools have not been sufficiently trained to accurately identify many wild species. They are biased towards North American species, because that is where the bulk of the data and training comes from.

Algorithms using AI tend to promote more salacious content, so they are likely to also recommend animal cruelty videos on various platforms. For example, YouTube contains content involving horrific animal abuse.

Some AI technologies are used in harmful animal experiments. Elon Musk’s brain implant company Neuralink, for instance, was accused of rushing experiments that harmed and killed monkeys.

Researchers warn AI could estrange humans from animals and cause us to care less about them. Imagine AI farms almost entirely run by smart systems that “look after” the animals. This would reduce opportunities for humans to notice and respond to animal needs.

Existing regulatory frameworks are inadequate

Australia’s animal welfare laws are already flawed and fail to address existing harms. They allow some animals to be confined to very small spaces, such as chickens in battery cages or pigs in sow stalls and farrowing crates. Painful procedures (such as mulesing, tail docking and beak trimming) can be legally performed without pain relief.

Only widespread community outrage forces governments to end the most controversial practices, such as the export of live sheep by sea.

This has implications for the development and use of artificial intelligence. Reform is needed to ensure AI does not amplify these existing animal harms, or contribute to new ones.

Internationally, some governments are responding to the need for reform.

The United Kingdom’s online safety laws now require social media platforms to proactively monitor and remove illegal animal cruelty content from their platforms. In Brazil, Meta (the owner of Facebook and WhatsApp) was recently fined for not taking down posts that had been tagged as illegal wildlife trading.

The EU’s new AI Act also takes a small step towards recognizing how the technology affects the environment we share with other animals.

Among other aims, the law encourages the AI industry to track and minimize the carbon and other environmental impact of AI systems. This would benefit animal as well as human health.

The current refresh of the Australian Animal Welfare Strategy, jointly led by federal, state and territory governments, gives us a chance to respond to the AI threat. It should be updated to consider how AI affects animal interests.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Animals in the machine: Why the law needs to protect animals from AI (2024, October 1)
retrieved 1 October 2024
from https://phys.org/news/2024-10-animals-machine-law-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.






A modular neutrino detector years in the making

Kevin Wood, a Chamberlain Postdoctoral Fellow at Berkeley Lab and run coordinator for the 2×2 prototype, and Brooke Russell, now the Neil and Jane Pappalardo Special Fellow in Physics at MIT and the 2×2 prototype’s charge readout expert, examine the 2×2 prototype detector. Credit: Dan Svoboda/Fermilab

Researchers at the U.S. Department of Energy’s SLAC National Accelerator Laboratory have joined collaborators from around the world to build a prototype neutrino detector that has now captured its first neutrino interactions at Fermi National Accelerator Laboratory (Fermilab).

The prototype detector will help fine-tune a full-size version of the DUNE Near Detector Liquid Argon (ND-LAr) detector in the coming years for the international Deep Underground Neutrino Experiment (DUNE), led by Fermilab, and in the meantime help illuminate some specific neutrino properties.

Researchers will also use the detector to test advanced machine learning techniques, developed at SLAC, that are expected to play a key role in processing the vast amount of data generated by DUNE.

Scientists will also use data from the prototype to study electron neutrinos, which are one of three known neutrino types. Nearly all of the neutrinos that come out of the neutrino beam at Fermilab will be muon neutrinos, but one in about 1,000 will be electron neutrinos.

“DUNE needs to measure the oscillation of muon neutrinos to electron neutrinos by counting both interactions,” said James Sinclair, a SLAC scientist working on the project. “We know that the interaction probability of electron neutrinos is different from muon neutrinos. The 2×2 will allow us to study and verify the new detector’s capability to identify and study electron neutrino interactions.”

Four sturdy boxes make a novel detector

Although the module system might seem simple, it faces a practical challenge, Tanaka said. It puts a lot more stuff in the way of detecting neutrinos. It fell to longtime SLAC mechanical engineer Knut Skarpaas VIII and his colleagues to design a system that was light, sturdy, and could withstand the very cold temperatures of liquid argon.

Skarpaas worked on many of the components for the TPC modules with collaborators at the University of Bern and Berkeley Lab. When Skarpaas first heard about the prototype detector, he walked up to a chalkboard and sketched a possible design of it. Many years later, the detector looked nearly identical to those initial drawings.

After completing the design, Skarpaas and the team focused on building the prototype’s electrostatic field cages, the boxes that contain all of the detector’s electronic components and the liquid argon. These cages define the volume of the prototype, and everything has to fit inside that volume.

Additionally, the team had to squeeze a high-voltage cathode, which guides those ionization electrons toward an anode, into the cage without touching any other metal parts. If metal touched the cathode, this could create an electrical arc, jeopardizing the detector equipment.

Perhaps the most difficult part of the building process was selecting the right power cable. The cable feeds electricity to the high-voltage cathode and makes the whole detector work, and it needs to be straight, cannot touch any other parts and must be able to shrink up to two inches due to the cold temperature inside of the detector. If the cable bends under these cold temperatures, it could shatter.

After many long days inside a machine shop at SLAC, Skarpaas and the team finished assembly and shipped the modules to the University of Bern for testing.

“Putting all of a detector’s pieces together is like being the conductor of an orchestra,” Skarpaas said. “You have to understand what everyone needs for their science goals and then meld these needs together to build the detector.”

Advanced machine learning techniques

DUNE’s primary goal is to explore some of the deepest questions about the composition of the universe by studying neutrino properties. To do this, researchers need to not only capture neutrino interactions, but also make sense of the data generated by these interactions.

In the case of the prototype detector, the data generated by up to thousands of neutrino interactions per day would be impossible for scientists to study manually image by image. Researchers therefore invented new machine learning techniques for this amount of data. Machine learning is a type of artificial intelligence that detects patterns in large datasets, then uses those patterns to make predictions and improve future rounds of analysis.

“By eye, it might be easy to find the information you need in a single image generated by the detector,” SLAC researcher Francois Drielsma said. “But it is difficult to teach a machine to perform this task. Sometimes there is the thought that if something is simple for a human being, it should be simple for a machine. But that is not necessarily true.”

Still, humans aren’t up to scanning millions of images at a time. They’ve also struggled to use traditional programming techniques to help identify objects in detector data, so Drielsma’s group started working on a machine learning technique called neural networks, a type of algorithm loosely modeled after the human brain.

Once a neural network is trained on a large set of data—whether from particle interactions or astronomical images—it can automatically analyze other complex datasets, almost instantaneously and with great precision.
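The train-then-apply workflow described above can be illustrated with a deliberately tiny sketch: a single artificial “neuron” (logistic regression, the simplest building block of a neural network) learns to separate two kinds of invented three-pixel patterns and then labels them. The patterns, learning rate, and epoch count here are illustrative assumptions, not details from the SLAC work; real detector networks are enormously larger.

```python
import math

def train(samples, labels, epochs=2000, lr=0.5):
    """Fit a single logistic neuron with plain gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy data: "track-like" patterns (label 1) light up the third pixel.
X = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 0]]
y = [1, 1, 0, 0]
w, b = train(X, y)
print([predict(w, b, xi) for xi in X])  # [1, 1, 0, 0]
```

Once trained, the same `predict` step runs almost instantaneously on new patterns, which is the property that makes the approach attractive for millions of detector images.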

The program is working better each day, and researchers will continue to fine-tune its performance over the coming years while the prototype detector is collecting data.

“It’s going to be a difficult task to train the program to do everything we want accurately,” Drielsma said. “But when things are difficult, they can be really entertaining.”

“The prototype is going to be very important because it’s the only source of neutrino beam data at energies comparable to the DUNE beam that will be available before DUNE is running,” said James Sinclair, a SLAC scientist working on the project. “We are excited to be completing this critical step in the experiment and are now ready to study the data that’s coming in.”

A modular design for an unusual problem

Neutrinos are fundamental particles unlike any other. They can pass through almost all matter largely unseen and can change forms along the way—a phenomenon called neutrino oscillation. Scientists think a better understanding of their unusual properties could help answer some of the most challenging questions about the origin of matter in the universe and the pattern of neutrino masses.

To detect neutrinos, physicists use what’s called a time projection chamber (TPC)—a vast tank of liquified noble gases such as argon. When a particle enters the chamber from outside, two things happen.

First, interactions between the particle and argon atoms create flashes of light called scintillation. Second, the particle can knock electrons free from argon atoms, ionizing them. TPCs typically include photosensitive equipment to detect scintillation and an electric field that guides free electrons to one end of the detector, where—traditionally—a wire mesh picks them up as an electrical current.

By comparing details of the flash with the time it takes electrons to arrive at the mesh, researchers can identify key details including what kinds of particles they’re picking up and how fast those particles are moving.
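The timing comparison amounts to simple arithmetic: the scintillation flash marks when the interaction happened (light crosses the detector almost instantly), while the electrons arrive only after drifting, so the gap between the two times fixes the distance from the collection plane. A minimal sketch, assuming a drift velocity of roughly 1.6 mm per microsecond (a typical liquid-argon figure at a 500 V/cm field; the numbers are illustrative, not from the article):

```python
# Illustrative reconstruction of drift distance in a liquid-argon TPC.
# Assumed (not from the article): drift velocity ~1.6 mm/us.
DRIFT_VELOCITY_MM_PER_US = 1.6

def drift_distance_mm(t_flash_us, t_arrival_us):
    """Distance from the collection plane to the ionization point.

    The flash gives the interaction time; the electron arrival time is
    delayed by the drift, so the difference gives the distance.
    """
    return DRIFT_VELOCITY_MM_PER_US * (t_arrival_us - t_flash_us)

# An electron arriving 200 us after the flash drifted ~320 mm.
print(drift_distance_mm(0.0, 200.0))
```

This also shows why flash-to-electron matching matters: with many overlapping interactions, pairing an electron with the wrong flash gives the wrong distance.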

The idea is to capture as many neutrino interactions as possible with a large volume of argon and a relatively small amount of detector equipment, almost all of which stays on the periphery of that volume.

But something more is needed for DUNE, said SLAC scientist Hiro Tanaka, the technical director for the DUNE near detector and head of SLAC’s efforts on the DUNE project.

Unlike many other neutrino experiments, DUNE will produce a very large number of neutrinos and beam them in bunches toward DUNE’s near detector outside Chicago.

Over the course of just a few microseconds, scientists expect to see multiple neutrino interactions in the near detector. The trouble is, all those interactions make it hard to tell which flash of light belongs to which neutrino, in part because large tanks of liquid argon scatter and diffuse each individual flash.

It also makes it hard to tell which electron comes from which ionization event, since any one electron takes milliseconds to reach the edge of a TPC, during which time many interactions may have occurred.

It was out of these concerns that the newly minted prototype, called the 2×2 detector, was born. On one level, the idea is simple: rather than use one giant TPC, break the device into a set of four TPC modules arranged in a two-by-two grid—hence the name.

Each module actually contains two separate volumes of argon with an opaque wall in the center. That wall effectively creates eight optically separate TPC tanks, making researchers less likely to mistake one neutrino flash for another. It also serves as the source of the electric field that draws ionization electrons to the sides of the detector module.

In addition, each module contains a new system for detecting ionization electrons, developed at DOE’s Lawrence Berkeley National Laboratory, that picks up not just when the electrons arrive but precisely where. This contrasts with traditional wire-based designs, in which the information provided by each plane of wires can be difficult to reconcile in the high interaction-rate environment of the DUNE near detector.

Combined with the light flashes, this will, for the first time, let researchers determine without ambiguity where neutrino interactions occurred in three dimensions.

Citation:
A modular neutrino detector years in the making (2024, October 1)
retrieved 1 October 2024
from https://phys.org/news/2024-10-modular-neutrino-detector-years.html







Using artificial intelligence to reduce risks to critical mineral supply

Project development with back-ended and front-ended risk. Credit: Nature Communications (2024). DOI: 10.1038/s41467-024-51661-7

Australia risks losing its world-leading advantage in critical and rare minerals used for clean energy, electric vehicles and batteries for solar energy, unless it embraces artificial intelligence in the mining sector, according to research from Monash University and the University of Tasmania.

In a paper published in Nature Communications, the researchers argue artificial intelligence will revolutionize the mining of copper, lithium, nickel, zinc, cobalt and rare earth minerals used to produce clean energy technologies.

Australia is in a prime position to benefit: it has the world’s largest proven reserves of nickel and zinc, the second largest proven reserves of cobalt and copper, and the third largest proven reserves of bauxite. It is also the world’s largest producer of bauxite and lithium and the third largest producer of cobalt.

Co-researcher Professor Russell Smyth, Deputy Dean (Research) in the Department of Economics at Monash University, said that to take advantage of these resources, Australia must embrace AI through all stages of the mining process.

“With the right policies and technological advancements, AI has the potential to transform the mining industry, making it more efficient, cost effective, less risky, and environmentally friendly,” said Professor Smyth.

Critical and rare minerals are a crucial part of achieving net zero emissions by 2050. But the International Energy Agency (IEA) has found it takes 12.5 years to move from exploration to production, a timeline many investors see as too risky.

To achieve global net zero by 2050, the IEA estimates investment of US$360-450 billion will be needed by 2030, against an anticipated supply of investment of US$180-220 billion. This implies an investment shortfall of up to US$230 billion.
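As a back-of-the-envelope check of those figures (pairing the low ends and the high ends of each range, which is our own assumption about how the quoted shortfall was derived):

```python
# Rough check of the IEA figures quoted above, all in US$ billions.
investment_needed = (360, 450)    # required by 2030
anticipated_supply = (180, 220)   # expected supply of investment
# Pair low-with-low and high-with-high scenarios (an assumption):
shortfall = tuple(n - s for n, s in zip(investment_needed, anticipated_supply))
print(shortfall)  # (180, 230)
```

The high-end gap of US$230 billion matches the shortfall the article quotes.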

Such a shortfall could lead to insufficient supply in the future, making decarbonization efforts more costly and potentially slowing them down. Professor Smyth said their research could help address a number of these issues.

“AI could improve processes such as mineral mapping by using drone-based photogrammetry and remote sensing; more accurately calculate the life of the mine and improve mining productivity including drilling and blasting performance,” said Professor Smyth.

“AI can also be used to reduce the required rate of return on investment by forecasting the risk of cost blow-outs, as well as equipment planning and predictive maintenance and management of equipment to minimize repairs.”

Co-researcher Associate Professor Joaquin Vespignani, from the Tasmanian School of Business and Economics at the University of Tasmania, said their theory suggests that back-ended critical mineral projects that have unaddressed technical and non-technical barriers, such as those involving lithium and cobalt, exhibit an additional risk for investors, which they term the back-ended risk premium.

“We show that the back-ended risk premium increases the cost of capital and, therefore, has the potential to reduce investment in the sector. We proposed that the back-ended risk premium may also reduce the gains in productivity expected from AI technologies in the mining sector,” Associate Professor Vespignani said.

“Progress in AI may, however, lessen the back-ended risk premium itself by shortening the duration of mining projects and the required rate of investment by reducing the associated risk. We conclude that the best way to reduce the costs associated with energy transition is for governments to invest heavily in AI mining technologies and research.

“Without significant investment by governments around the world in AI within the mining industry to increase productivity and improve environmental practices, there is a high risk that the clean energy transition will become costly for communities, potentially slowing down decarbonization efforts.”

More information:
Joaquin Vespignani et al, Artificial intelligence investments reduce risks to critical mineral supply, Nature Communications (2024). DOI: 10.1038/s41467-024-51661-7

Provided by
Monash University


Citation:
Using artificial intelligence to reduce risks to critical mineral supply (2024, October 1)
retrieved 1 October 2024
from https://techxplore.com/news/2024-10-artificial-intelligence-critical-mineral.html







Tongan volcanic eruption triggered by explosion equivalent to ‘five underground nuclear bombs,’ new research reveals

Jinyin Hu and Dr. Thanh-Son Phạm. Credit: Jamie Kidston/ANU

The 2022 eruption of the Hunga Tonga underwater volcano was one of the largest volcanic eruptions in history, and now, two years later, new research from The Australian National University (ANU) has revealed its main trigger. The research is published in the journal Geophysical Research Letters.

Until now, the cause of the cataclysmic event has remained largely a mystery to the scientific community, yet a student-led team of ANU seismologists has been able to shed new light on the natural explosion that initiated the event.

The student researchers analyzed the climactic event’s noisy but valuable seismic records to decipher its mysterious physical mechanism.

“Our findings confirm there was an explosion, possibly due to a gas-compressed rock, which released energy that equated to five of the largest underground nuclear explosions conducted by North Korea in 2017,” study co-author and ANU Ph.D. student, Jinyin Hu, said.

“Our model suggests the event resulted from the gas-compressed rock being trapped underneath a shallow sea, like an overcooked pressure cooker.

“This would be surprising to many because it had been commonly thought that the interaction of hot magma with cold seawater caused such massive underwater volcanic eruptions.

“We used a technique previously developed to study underground explosions for this natural explosion.”

Study co-author Dr. Thanh-Son Phạm said the explosion caused a massive vertical push of water upwards into the atmosphere, causing tsunamis that reached as high as 45 meters at nearby islands.

“The water volume that was uplifted during the event was huge. Based on our estimates, there was enough water to fill about one million standard Olympic-sized swimming pools,” Dr. Phạm said.
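For a sense of scale, taking the common nominal Olympic pool of 50 m × 25 m × 2 m (an assumption on our part; actual pool depths vary), one million pools works out to:

```python
# Order-of-magnitude estimate of the uplifted water volume, assuming
# a nominal Olympic pool of 50 m x 25 m x 2 m (pool depths vary).
POOL_VOLUME_M3 = 50 * 25 * 2          # 2,500 m^3 per pool
pools = 1_000_000
volume_m3 = pools * POOL_VOLUME_M3    # 2.5 billion m^3
volume_km3 = volume_m3 / 1e9
print(volume_km3)  # 2.5 (cubic kilometres)
```

That is roughly 2.5 cubic kilometres of water pushed upwards, consistent with tsunamis tens of meters high at nearby islands.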

Study co-author, Professor Hrvoje Tkalčić, from ANU, added, “Using seismic waveform modeling, we observed a significant vertical force pointing upward during the event. At first, we were confused by it. But then we realized that the solid earth rebounded upwards after the water column got uplifted.”

“A couple of weeks ago, we saw how seismology was used to explain an extraordinary sequence of events in Greenland that included a landslide due to glacial melting, a tsunami, and a seiche lasting for nine days observed globally.

“With Hunga Tonga, we have a relatively short-duration explosive event observed globally and, again, academically driven curiosity and forensic seismology at its best.”

According to the ANU seismologists, the Tonga eruption is the best instrumentally recorded event of its size in recent history.

“This is one of the largest events in our lifetime. Luckily, we had multiple ways to record the event, from data from satellite images to seismic sensors that record the sound waves and structure,” Mr. Hu said.

“There was another event of a similar size in 1991, at Mount Pinatubo in the Philippines, but back then monitoring systems weren’t as sophisticated as they are now.”

The ANU seismologists believe that monitoring the release of gases and micro-seismicity from volcanic sites can help better prepare for future events.

More information:
Jinyin Hu et al, A Composite Seismic Source Model for the First Major Event During the 2022 Hunga (Tonga) Volcanic Eruption, Geophysical Research Letters (2024). DOI: 10.1029/2024GL109442

Citation:
Tongan volcanic eruption triggered by explosion equivalent to ‘five underground nuclear bombs,’ new research reveals (2024, October 1)
retrieved 1 October 2024
from https://phys.org/news/2024-10-tongan-volcanic-eruption-triggered-explosion.html







Syrian hamsters reveal genetic secret to hibernation

A hibernating Syrian hamster. Credit: Hibernation metabolism, physiology, and development group, Hokkaido University

Researchers have identified a gene that enables mammalian cells to survive for long periods at extremely low temperatures, which animals experience during hibernation.

Body temperatures below 10 degrees Celsius swiftly prove fatal for humans and many other mammals, because prolonged cold stress causes cells to accumulate damaging free radicals—in particular lipid peroxide radicals—resulting in cell death and organ failure. But a few mammalian species can survive cold stress by hibernating.

Hibernation in many small mammals involves cycles of days to weeks of deep torpor in which animals stop moving and their body temperature drops to extremely low levels, interspersed with short periods of normal body temperature and activity.

Now, a study led by Assistant Professor Masamitsu Sone and Professor Yoshifumi Yamaguchi of Hokkaido University, Japan, has identified a key gene that helps hibernating Syrian hamsters (golden hamsters, Mesocricetus auratus) avoid cold-induced cell death. The findings were published in the journal Cell Death & Disease.

Syrian hamster cell cultures survive in cold conditions at 4°C (left). Human cell cultures typically exhibit cell death in cold conditions (middle), but when Gpx4 is overexpressed by cells in the culture, cell death rate drops drastically (right). Credit: Masamitsu Sone

To identify the gene, the researchers first engineered cold-sensitive human cancer cells to carry genes from cold-resistant hamster cells, and then exposed the human cells to repeated cycles of prolonged cold and rewarming.

By analyzing the genomes of the human cells that survived this cold exposure and rewarming stresses, the team could identify the hamster genes that had been incorporated into the human cells’ genome and enabled them to survive the cold.

This analysis revealed a likely candidate: the gene coding for glutathione peroxidase 4 (Gpx4), one of a family of proteins already known to reduce the impact of reactive oxygen species in mammalian cells.

When the activity of this gene was suppressed in hamster cells, either by engineering a knock-out version or by chemically suppressing its activity, the cells could only survive shorter periods of exposure to extreme cold—two days, instead of five—before they died due to the build-up of lipid peroxide.

Gpx4 is expressed in human and hamster cells, but only hamsters can hibernate, so the research team examined whether human Gpx4 and hamster Gpx4 behave differently. Interestingly, they found that even the human Gpx4 can provide cold protection when overexpressed in human cells.

“It’s still an open question why non-hibernator cells are much more vulnerable to cold stress than hibernator cells even though the expression levels of Gpx4 protein are comparable,” says Sone.

These findings are a first step towards finally understanding the mystery of how some mammals are able to safely hibernate through extreme cold. The discovery could have potential applications for human health, such as improving the long-term preservation of organs for transplantation using low temperatures, or in the use of hypothermia as a therapeutic tool.

More information:
Masamitsu Sone et al, Identification of genes supporting cold resistance of mammalian cells: lessons from a hibernator, Cell Death & Disease (2024). DOI: 10.1038/s41419-024-07059-w

Citation:
Syrian hamsters reveal genetic secret to hibernation (2024, October 1)
retrieved 1 October 2024
from https://phys.org/news/2024-10-syrian-hamsters-reveal-genetic-secret.html





