
Researchers train sheep to complete awake MRI imaging

A group of sheep. Credit: INRAE – Sophie Normant

Magnetic resonance imaging (MRI) is a technique commonly used to explore the brains of sheep. Until now, it had only been performed under general anesthesia, to ensure the animal’s immobility. Anesthesia, however, leads to stress and other negative side-effects, in addition to jeopardizing the study of brain activity.

A research team from INRAE has developed a training protocol adapted to sheep in order to carry out MRI acquisitions in animals while they are awake, without the need for restraint. To do this, researchers drew on previous work with dogs, which until now had been the only animal species capable of carrying out this type of protocol.

In the nursery of the Animal Physiology Facility experimental research unit (UEPAO), located at the INRAE Val de Loire center in Nouzilly, researchers began a familiarization phase as soon as the lambs were born. The objective was to identify which animals were most receptive to being stroked or to having foam objects placed near their heads.

The paper is published in the journal Behavior Research Methods.

After choosing 10 lambs, an initial training phase took place at the Nouzilly sheep farm. The research team trained the animals to climb a ramp to reach a mock MRI scanner and then lie down. The lambs were also taught to place their heads in a mock MRI coil.

Once they arrived in the real MRI room, the sheep were able to reproduce the same behavior very easily, but had some difficulty remaining perfectly still. It took a few weeks for the animals to get used to the vibrations of the machine and stop moving for a few minutes at a time. Ultimately, the MRI images of their brains were comparable to those obtained from anesthetized sheep; this was initially achieved in six of the ten trained sheep, and has since been achieved in nine. The protocol lasted nine months, from the birth of the lambs to the first MRI acquisitions.

Sheep performing brain MRI acquisition. (A) The head position in the sheep RF coil. The head is “blocked” with foam pieces adapted to each individual. A trainer stands at the front of the bore with a hand placed on the sheep’s back. (B) The other trainer sits at the back of the bore, facing the sheep, to maintain visual contact. Credit: Behavior Research Methods (2024). DOI: 10.3758/s13428-024-02449-6

The success of this protocol is already opening up new avenues for research into animal neuroimaging (e.g., fMRI)—since it makes it possible to study brain function in awake animals. A study looking into the activation of certain brain regions in relation to hearing is currently underway, and is the subject of a Ph.D. thesis that relies on this training protocol.

This example of voluntary cooperation between trainer and sheep illustrates the animal’s ability to learn, and underlines the importance of human-animal relationships in the development of innovative methods. The study also opens up new possibilities for training other animals to carry out awake MRI scans. Such training methods could have numerous other applications, in areas such as shearing or medical training—when the animal learns to collaborate during veterinary care.

More information:
Camille Pluchot et al, Sheep (Ovis aries) training protocol for voluntary awake and unrestrained structural brain MRI acquisitions, Behavior Research Methods (2024). DOI: 10.3758/s13428-024-02449-6

Provided by
INRAE – National Research Institute for Agriculture, Food and Environment

Citation:
Researchers train sheep to complete awake MRI imaging (2024, July 1)
retrieved 1 July 2024
from https://phys.org/news/2024-07-sheep-mri-imaging.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Updated software improves slicing for large-format 3D printing

The full process for additive manufacturing as represented by a sphere to show the CAD design, wireframe mesh creation, toolpath generation and 3D printing process. Credit: Abby K. Barnes/ORNL, U.S. Dept. of Energy

Researchers at the Department of Energy’s Oak Ridge National Laboratory have developed the first additive manufacturing slicing application that simultaneously speeds up and simplifies the digital conversion of designs into accurate, large-format three-dimensional parts in a factory production setting.

The technology, known as Slicer 2, can help widen the use of 3D printing for larger objects made from metallic and composite materials. Objects the size of a house and beyond become possible, including land and aquatic vehicles and aerospace components such as parts for reusable space vehicles.

Slicing software converts a computer-aided design, or CAD, digital model into a series of two-dimensional layers called slices. It calculates print parameters for each slice, such as printhead path and speed, and saves the information as numerical-control machine instructions. The resulting file contains the instructions a 3D printer follows to create a precise physical version of the model.
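The geometric core of the slicing step can be sketched generically: intersect each triangle of the mesh with a horizontal plane at each layer height to recover the 2D contour segments of that slice. The following is an illustrative minimal sketch of how a slicer computes slice geometry, not ORNL Slicer 2's actual implementation; the function names are invented for the example.

```python
# Generic illustration of the slicing step: intersect each mesh triangle with
# a horizontal plane z = h to recover the 2D contour segments of one slice.
# Function names are invented for this sketch; this is not ORNL Slicer 2 code.

def slice_triangle(tri, h):
    """Return the 2D segment where triangle `tri` crosses the plane z = h,
    or None if the triangle does not cross it."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - h) * (z2 - h) < 0:  # this edge crosses the plane
            t = (h - z1) / (z2 - z1)
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height, z_max):
    """Slice a mesh into layers; each layer is a list of 2D segments that a
    toolpath generator would then order into closed contours."""
    layers, h = [], layer_height / 2  # sample mid-layer to avoid grazing vertices
    while h < z_max:
        layers.append([s for s in (slice_triangle(t, h) for t in triangles) if s])
        h += layer_height
    return layers
```

A real slicer then links these segments into closed loops, offsets them for the nozzle or bead width, and emits the machine instructions (paths, speeds, extrusion rates) for each layer.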

“The quality of a 3D-printed object is directly related to the accuracy and complexity of the toolpaths that control the machine’s movements,” said ORNL researcher Alex Roschli. “ORNL Slicer 2 software connects directly with various types of 3D printers to create an integrated platform and communicates with sensors to increase print accuracy.”

ORNL Slicer 2 screen capture showing the 45-degree toolpath for printing a wind turbine blade mold. Credit: Alex Roschli/ORNL, U.S. Dept. of Energy

Researchers designed ORNL Slicer 2 with more than 500 settings that control the internal structure, shape, temperature and other parameters of individual parts, layers or regions. It also interfaces with simulation software that shows complex heat and stress relationships during the additive manufacturing process. The software works with pellet thermoplastic, filament thermoplastic, thermoset, concrete, laser wire welding, MIG welding and blown-powder directed-energy deposition additive manufacturing systems.

“This connectivity translates into improved machine commands that increase reliability and repeatability of the additive manufacturing process,” said Roschli. “The result of this software is that additive manufacturers can produce large factory parts with fewer machines and less cost than traditional machining methods.”

ORNL Slicer 2 is an open-source computer program available on GitHub and used by more than 50 equipment manufacturers, industrial end users and universities.

Citation:
Updated software improves slicing for large-format 3D printing (2024, July 1)
retrieved 1 July 2024
from https://techxplore.com/news/2024-07-software-slicing-large-format-3d.html







Study finds timing of rainfall crucial for flood prediction

Credit: University of Colorado at Boulder

With record rainfall projected to continue into the future, many worry extreme flooding will follow suit. But a new CIRES-led study published today in Science of the Total Environment found an increase in precipitation alone won’t necessarily increase disastrous flooding—instead, flood risk depends on how many days have passed between storms.

In the study, CIRES Fellow and Western Water Assessment director Ben Livneh and his colleagues, including CIRES Fellow Kris Karnauskas, looked for a new way to understand soil moisture and how it impacts flooding. The research team knew soil moisture is important when understanding floods, but measuring soils effectively is challenging.

So they found a proxy for soil moisture: precipitation intermittency, the length of a dry spell between precipitation events. Simply put: after a prolonged time since the last rain, it takes a larger storm to generate flooding; with fewer days between storms, a wider range of conditions can lead to flooding.

“We can actually understand changes in flood risk based on the number of days since the last rain event,” Livneh said. “We wanted to make it straightforward because soil water is hard to predict.”

The research focused on semi-arid and arid regions and looked at rain as a form of precipitation rather than snow. To create a value for precipitation intermittency, researchers looked at historical observations of 108 watersheds around the U.S. from 1950–2022. Through analysis of these observations, the goal was to understand whether wet or dry soils preceded heavy rain events—and how that influenced floods.

Soil moisture is notoriously difficult to estimate or simulate; results can vary from one person’s backyard to their front yard, and understanding how soil moisture influences flood events is even harder. Nels Bjarke, a Western Water Assessment postdoctoral researcher, ran the analysis for the study.

“We don’t have comprehensive observations of soil moisture that are continuous over space or continuous through time,” said Bjarke. “Therefore, it can be difficult to apply some sort of predictive framework for flooding using just soil moisture because the data are sparse.”

Yet, precipitation is widely measured, so the team tested precipitation as a proxy for soil moisture by looking at the timing of rain, rather than the amount.

Through this analysis, the team defined a timescale as a meaningful measure of precipitation intermittency, categorizing it in five-day increments. Ten days or fewer between storms indicated low intermittency, when a wide range of storms could produce floods.

Drier periods with 20 days or more between storms defined high intermittency, when only major storms could produce floods. Overall, flood probabilities are 30% lower following long dry spells.
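The intermittency measure described above is simple to compute from a daily rain record: count the dry days preceding each wet day, then bin the result. A minimal sketch follows; the 10- and 20-day thresholds come from the article, while the 1 mm/day wet-day cutoff and the function names are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of the study's core proxy: the dry-spell length before each storm
# stands in for antecedent soil moisture. The 10- and 20-day thresholds follow
# the article; the 1 mm wet-day threshold and function names are illustrative.

def dry_spells(daily_precip_mm, wet_day_mm=1.0):
    """For each wet day, return the number of dry days since the previous wet day."""
    spells, days_dry = [], 0
    for p in daily_precip_mm:
        if p >= wet_day_mm:
            spells.append(days_dry)
            days_dry = 0
        else:
            days_dry += 1
    return spells

def intermittency_class(spell_days):
    """Classify a dry-spell length into the article's categories."""
    if spell_days <= 10:
        return "low"           # a wide range of storms can produce floods
    if spell_days >= 20:
        return "high"          # only major storms are likely to produce floods
    return "intermediate"
```

Because daily precipitation is widely observed, a classification like this can be run anywhere a rain gauge record exists, which is precisely the practical appeal of the proxy.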

The 2013 floods in Boulder are a real-life example of how precipitation intermittency is applied to flood projections. Seven days of heavy rain nearly doubled the previous record for rainfall. The event displaced hundreds and caused $2 billion in property damage, according to NOAA.

Forecasters and emergency managers could use the paper’s findings to anticipate very real flooding risks. Since wide-ranging observations of precipitation exist, forecasters can take the findings of this paper and use intermittency to help predict the likelihood of a flood.

“As we enter the era of big data, we can benefit from simple proxies like the dry-spell length as a way to more intuitively understand extreme events,” said Livneh.

More information:
Ben Livneh et al, Can precipitation intermittency predict flooding?, Science of The Total Environment (2024). DOI: 10.1016/j.scitotenv.2024.173824

Citation:
Study finds timing of rainfall crucial for flood prediction (2024, July 1)
retrieved 1 July 2024
from https://phys.org/news/2024-07-rainfall-crucial.html







City fern, country fern: Citizen science is helping to study why some plants love the city life

Credit: American Journal of Botany (2024). DOI: 10.1002/ajb2.16364

In research published in the American Journal of Botany, University of Connecticut Department of Earth Sciences Assistant Professor in Residence Tammo Reichgelt has used citizen science data to show that while some ferns prefer to remain rural, others thrive in urban settings and could play a role in mitigating the urban heat island effect.

Reichgelt explains that the project was initially a hobby that grew after he noticed a peculiar phenomenon while on a walk: “I just happened to notice, ‘Wow, there’s a lot of ferns here. Strange.'”

Strange, because he was not walking around the woods or another area where ferns typically flourish, but was out on a stroll in the Rockville section of Vernon, surrounded by asphalt, concrete, and brick. Reichgelt is an active member of the iNaturalist community, which uses an app in which people log observations of plants and animals they spot around them. In iNaturalist, Reichgelt also began to notice this out-of-place presence of ferns in human-made environments, and found that it seemed to be the case only with specific fern species.

Despite this intriguing observation, there is limited research on species preference for urban environments. But from these casual observations, it started to look to him like some ferns appeared to thrive in urban settings, while others prefer the tranquility of rural life.

Reichgelt started visiting different towns around Connecticut to log new entries to see how pervasive these ferns are in urban environments.

“Especially in old mill towns in Connecticut, like Rockville, Willimantic, or Norwich, that tend to have a denser and older urban core, there were rock-dwelling ferns on buildings, bridges, and other structures. It seems like they particularly like railroad bridges and retaining walls,” says Reichgelt, who wondered if he could do an analysis explaining what drove species establishment in urban areas.

After obtaining more than 22,000 georeferenced observations (mostly from iNaturalist) for 16 rock-dwelling fern species found in the Northeast, Reichgelt overlaid the observational data with land-use data from the United States Geological Survey (USGS). The land-use data helped him to differentiate between natural environments and developed surfaces, and enabled a large-scale spatial analysis of the fern observations and the environments where they were spotted growing. Lo and behold, Reichgelt found a large difference between different rock-dwelling fern species throughout the Northeast.
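In essence, the overlay analysis is a tally: look up the land-use class under each observation's coordinates and compute, per species, the share of observations falling on developed land. The sketch below illustrates that logic; the lookup function is a hypothetical stand-in for a real raster query against the USGS land-use data, and the species figures in the study came from the full 22,000-observation dataset, not from toy inputs like these.

```python
# Illustrative sketch of the overlay analysis: map each georeferenced fern
# observation onto a land-use classification and tally, per species, the
# fraction of observations on developed land. `is_developed` is a stand-in
# for a real lookup against a land-use raster such as the USGS data.
from collections import defaultdict

def developed_fraction(observations, is_developed):
    """observations: iterable of (species, lat, lon) tuples;
    is_developed(lat, lon) -> bool, e.g. a land-use raster lookup."""
    counts = defaultdict(lambda: [0, 0])  # species -> [developed_count, total]
    for species, lat, lon in observations:
        counts[species][0] += int(is_developed(lat, lon))
        counts[species][1] += 1
    return {sp: dev / total for sp, (dev, total) in counts.items()}
```

A species whose fraction is high under this tally, like the blunt cliff fern in the study, is one whose recorded occurrences cluster on developed land.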

“Over 50% of blunt cliff fern (Woodsia obtusa) observations were in highly developed areas,” he says. “The purple stem cliff brake (Pellaea atropurpurea) is more common south of Connecticut, but abundant in developed areas. A surprising species is the Tennessee bulblet fern (Cystopteris tennesseensis), which is rare in its natural range but seems to thrive in the Philadelphia suburbs.”

Through his analysis, Reichgelt found that the two significant climatic variables dividing the urban ferns from species that have remained rural are average summertime temperature and the highest summer temperature.

“That means these plants need to be adapted to hot environments to be able to survive in an urban environment. Urban areas, mainly because of the high heat capacity of building materials like concrete and asphalt, tend to be much hotter in summer than their rural surroundings, a phenomenon known as the urban heat island effect,” says Reichgelt. “That these fern species are pre-adapted to hotter environments seems to predispose them to be able to thrive in urban environments. For the 11 species that don’t grow in urban environments, it seems that they cannot tolerate the urban heat island effect.”

That doesn’t necessarily mean that the others do not grow on human-made vertical surfaces at all. Rather, they just don’t do it in urban environments, says Reichgelt.

One example is the rock polypody (Polypodium virginianum), which is happy growing on human-made walls as long as it has some protective cover from other vegetation. Another example is Mackay’s fragile fern (Cystopteris tenuis), which can be found growing on bridge abutments all over Connecticut, as long as it’s not in an urban environment. Reichgelt suspects these species are not found in cities, probably because the summer temperatures simply get too high there.

One useful aspect of citizen science data like that from iNaturalist is that observations can come from anywhere citizen scientists happen to be, which adds new variety to the kinds of data available and makes studies like this possible. Reichgelt notes that occurrence data generally comes from herbaria, whose researchers usually collect in sample plots in natural environments.

“Very rarely would researchers go and make observations in urban environments. Citizen science data is an interesting new source of information that is rapidly crowding out other sources of information. The inclusion of urban habitats is an example of an advantage of using citizen science. Still, scientists have only just started to look into potential novel biases that are unique to citizen science.”

Reichgelt is a paleobotanist who studies Earth’s ancient climate states and the plants that thrived in them. This may seem like an unlikely background to start exploring why some ferns are expanding their habitats into urban settings.

In his paleoclimate studies, Reichgelt uses the modern range of plants to understand the climatic conditions a fossil plant may have lived in.

“Let’s say you find a palm fossil in the Arctic. I use the modern-day climate niche of palms to reconstruct the climate of the Arctic at the time. I base that on modern-day occurrence data. You can use a similar occurrence-based climate niche analysis to figure out why different fern species grow in urban environments.”

Reichgelt did not find many published studies exploring similar observations, and none for ferns specifically. This project has turned into an unexpectedly interesting rabbit hole that Reichgelt is excited to continue exploring. He plans to compare physiological differences between ferns in both environments and study aspects of the microenvironments where the ferns are observed.

“I would like to compare whether urban and rural ferns have different functional traits. For example, whether they differ in their water regulation or how they photosynthesize. In other words, what are the adaptations that allow certain ferns to thrive in urban environments?”

These findings are useful because species that thrive in human-made environments could help researchers looking for ways to mitigate human health-related issues in urban settings, such as urban heat islands, and ferns could be especially valuable since they seem to thrive without human intervention.

“The weird thing about these urban ferns is that they grow in places that are not tended,” he says. “They’re native to the area but are weedy in these urban environments. Having something that can survive in an urban area and absorb the heat cools the city down. A tree has a large surface area, so focusing on trees or vines makes a lot of sense.

“Still, if you want to create a diverse urban ecosystem, you want to include a diverse array of species because an ecosystem builds from the ground up. Ferns are relatively easy and cheap, because urban fern establishment seems to literally just be a matter of lack of maintenance.”

More information:
Tammo Reichgelt, Linking the macroclimatic niche of native lithophytic ferns and their prevalence in urban environments, American Journal of Botany (2024). DOI: 10.1002/ajb2.16364

Citation:
City fern, country fern: Citizen science is helping to study why some plants love the city life (2024, July 1)
retrieved 1 July 2024
from https://phys.org/news/2024-07-city-fern-country-citizen-science.html







Computer scientists develop new and improved camera inspired by the human eye

A diagram of the novel camera system developed by UMD computer scientists Botao He, Yiannis Aloimonos, Cornelia Fermüller, Jingxi Chen and Chahat Deep Singh. Credit: Botao He, Yiannis Aloimonos, Cornelia Fermüller, Jingxi Chen and Chahat Deep Singh

A team led by University of Maryland computer scientists has invented a camera mechanism that improves how robots see and react to the world around them. Inspired by how the human eye works, their innovative camera system mimics the tiny involuntary movements used by the eye to maintain clear and stable vision over time.

The team’s prototyping and testing of the camera—called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV)—is detailed in a paper published in the journal Science Robotics.

“Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today’s event cameras struggle to capture sharp, blur-free images when there’s a lot of motion involved,” said the paper’s lead author Botao He, a computer science Ph.D. student at UMD.

“It’s a big problem because robots and many other technologies—such as self-driving cars—rely on accurate and timely images to react correctly to a changing environment. So, we asked ourselves: How do humans and animals make sure their vision stays focused on a moving object?”

For He’s team, the answer was microsaccades, small and quick eye movements that involuntarily occur when a person tries to focus their view. Through these minute yet continuous movements, the human eye can keep focus on an object and its visual textures—such as color, depth and shadowing—accurately over time.

“We figured that just like how our eyes need those tiny movements to stay focused, a camera could use a similar principle to capture clear and accurate images without motion-caused blurring,” He said.

The team successfully replicated microsaccades by inserting a rotating prism inside the AMI-EV to redirect light beams captured by the lens. The continuous rotational movement of the prism simulated the movements naturally occurring within a human eye, allowing the camera to stabilize the textures of a recorded object just as a human would. The team then developed software to compensate for the prism’s movement within the AMI-EV to consolidate stable images from the shifting lights.
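The compensation step just described can be sketched simply: if the prism's deflection at time t is a known image-plane offset that rotates at a constant rate, each event's pixel coordinates can be shifted back by that offset before images are integrated. This is a deliberately simplified model (fixed deflection radius, constant angular speed, invented function names), not the calibration or software used in the paper.

```python
# Simplified sketch of prism-motion compensation for an event camera: undo the
# known, rotating image shift introduced by the prism at each event timestamp.
# The constant-rate, fixed-radius offset model is an illustrative assumption.
import math

def compensate(events, radius_px, omega_rad_s):
    """events: (t, x, y) tuples; returns events with the prism shift removed."""
    out = []
    for t, x, y in events:
        theta = omega_rad_s * t          # prism angle at the event's timestamp
        dx = radius_px * math.cos(theta)  # image-plane shift caused by the prism
        dy = radius_px * math.sin(theta)
        out.append((t, x - dx, y - dy))
    return out
```

Because every event carries its own timestamp, this correction can be applied per event rather than per frame, which is what lets the stabilized images stay sharp under motion.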

Depiction of novel event camera system versus standard event camera system. Credit: Botao He, Yiannis Aloimonos, Cornelia Fermuller, Jingxi Chen, Chahat Deep Singh

Study co-author Yiannis Aloimonos, a professor of computer science at UMD, views the team’s invention as a big step forward in the realm of robotic vision.

“Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analyzed. Perception happens through that process and that’s how we understand the world,” explained Aloimonos, who is also director of the Computer Vision Laboratory at the University of Maryland Institute for Advanced Computer Studies (UMIACS). “When you’re working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.”

The researchers also believe that their innovation could have significant implications beyond robotics and national defense. Scientists working in industries that rely on accurate image capture and shape detection are constantly looking for ways to improve their cameras—and AMI-EV could be the key solution to many of the problems they face.

“With their unique features, event sensors and AMI-EV are poised to take center stage in the realm of smart wearables,” said research scientist Cornelia Fermüller, senior author of the paper. “They have distinct advantages over classical cameras—such as superior performance in extreme lighting conditions, low latency and low power consumption. These features are ideal for virtual reality applications, for example, where a seamless experience and the rapid computations of head and body movements are necessary.”

In early testing, AMI-EV was able to capture and display movement accurately in a variety of contexts, including human pulse detection and rapidly moving shape identification. The researchers also found that AMI-EV could capture motion in tens of thousands of frames per second, outperforming most typically available commercial cameras, which capture 30 to 1000 frames per second on average.

This smoother and more realistic depiction of motion could prove to be pivotal in anything from creating more immersive augmented reality experiences and better security monitoring to improving how astronomers capture images in space.

“Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what isn’t,” Aloimonos said. “As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come.”

More information:
Botao He et al, Microsaccade-inspired event camera for robotics, Science Robotics (2024). DOI: 10.1126/scirobotics.adj8124

Citation:
Computer scientists develop new and improved camera inspired by the human eye (2024, July 1)
retrieved 1 July 2024
from https://techxplore.com/news/2024-07-scientists-camera-human-eye.html





