
From wearables to swallowables: Engineers create GPS-like smart pills with AI

Credit: Khan Lab at USC Viterbi School of Engineering

Imagine finding your location without GPS. Now apply this to tracking an item inside the body. This has been the challenge with tracking “smart” pills (pills equipped with smart sensors) once they are swallowed. At the USC Viterbi School of Engineering, innovations in wearable electronics and AI have led to the development of ingestible sensors that not only detect stomach gases but also provide real-time location tracking.

Developed by the Khan Lab, these capsules are tailored to identify gases associated with gastritis and gastric cancers. The research, published in Cell Reports Physical Science, shows how these smart pills can be accurately monitored by a newly designed wearable system. This breakthrough represents a significant step forward in ingestible technology, which Yasser Khan, an assistant professor of electrical and computer engineering at USC, believes could someday serve as a “Fitbit for the gut” and a tool for early disease detection.

While wearables with sensors hold a lot of promise for tracking body functions, the ability to track ingestible devices within the body has been limited. With innovations in materials, the miniaturization of electronics, and new protocols developed by Khan, researchers have now demonstrated the ability to track the location of devices specifically in the GI tract.

Khan’s team, working with the USC Institute for Technology and Medical Systems Innovation (ITEMS) at the Michelson Center for Convergent Biosciences, placed a wearable coil that generates a magnetic field on a T-shirt. This field, coupled with a trained neural network, allows the team to locate the capsule within the body. According to Angsagan Abdigazy, lead author of the work and a Ph.D. student in the Khan Lab, this has not been demonstrated with a wearable before.
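In outline, the localization step is a regression problem: sensor readings produced by the coil's magnetic field go in, and an estimated 3D capsule position comes out. The sketch below is a minimal illustration of that idea, not code from the paper; the simplified field model, sensor layout, and network size are all assumptions made for the example.

```python
# Minimal sketch: learn capsule position from magnetic field readings.
# The 1/d^3 field model, sensor layout, and network size are illustrative
# assumptions, not details taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_SENSORS = 8
sensor_pos = torch.rand(N_SENSORS, 3) * 0.3   # hypothetical torso sensor spots (m)

def field_readings(capsule_pos):
    """Field magnitude at each sensor from a source at capsule_pos.
    Simplified isotropic 1/d^3 falloff standing in for a full dipole model."""
    d = (sensor_pos - capsule_pos).norm(dim=1).clamp(min=1e-3)
    return (1.0 / d**3).log()                 # log-compress the dynamic range

# Regressor: sensor readings -> 3D capsule position.
net = nn.Sequential(
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random capsule positions inside a 20 cm cube standing in for the gut.
    pos = torch.rand(128, 3) * 0.2
    readings = torch.stack([field_readings(p) for p in pos])
    loss = nn.functional.mse_loss(net(readings), pos)
    opt.zero_grad(); loss.backward(); opt.step()

test_pos = torch.tensor([0.10, 0.05, 0.12])
print(net(field_readings(test_pos).unsqueeze(0)))  # estimate near test_pos
```

In the real system the network would be trained on measured field data rather than a toy model, but the input-to-position mapping is the same shape of problem.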

The second innovation in this device is the newly created “sensing” material. The capsules are outfitted not just with electronics for tracking location but also with an “optical sensing membrane that is selective to gases.” This membrane is composed of materials whose electrons change their behavior in the presence of ammonia gas.

Ammonia is produced by H. pylori, a gut bacterium whose elevated levels can be a signal of peptic ulcers, gastric cancer, or irritable bowel syndrome. Thus, says Khan, “The presence of this gas is a proxy and can be used as an early disease detection mechanism.”

The USC team has tested the ingestible device in many different environments, including liquids and a simulated bovine intestine. “The ingestible system with the wearable coil is both compact and practical, offering a clear path for application in human health,” says Khan. The device is currently patent pending, and the next step is to test the system in swine models.

Beyond early detection of peptic ulcers, gastritis, and gastric cancers, the device has potential to monitor brain health. How? Through the brain-gut axis. Neurotransmitters are present in the gut, and “how they’re upregulated and downregulated have a correlation to neurodegenerative diseases,” says Khan.

This focus on the brain is the ultimate goal of Khan’s research. He is interested in developing non-invasive ways to detect neurotransmitters related to Parkinson’s and Alzheimer’s.

More information:
Angsagan Abdigazy et al, 3D gas mapping in the gut with AI-enabled ingestible and wearable electronics, Cell Reports Physical Science (2024). DOI: 10.1016/j.xcrp.2024.101990

Citation:
From wearables to swallowables: Engineers create GPS-like smart pills with AI (2024, June 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-wearables-swallowables-gps-smart-pills.html


A fully edible robot could soon end up on our plate, say scientists

Artistic rendering of a future edible robot. Credit: Nature Reviews Materials (2024). DOI: 10.1038/s41578-024-00688-9

A fully edible robot could soon end up on our plate if we can overcome some technical hurdles, say EPFL scientists involved in RoboFood, a project that aims to marry robots and food.

Robots and food have long been distant worlds: Robots are inorganic, bulky, and non-disposable; food is organic, soft, and biodegradable. Yet, research that develops edible robots has progressed recently and promises positive impacts: Robotic food could reduce electronic waste, help deliver nutrition and medicines to people and animals in need, monitor health, and even pave the way to novel gastronomical experiences.

But how far are we from having a fully edible robot for lunch or dessert? And what are the challenges? Scientists from the RoboFood project, based at EPFL, address these and other questions in a perspective article in the journal Nature Reviews Materials.

“Bringing robots and food together is a fascinating challenge,” says Dario Floreano, director of the Laboratory of Intelligent Systems at EPFL and first author of the article. In 2021, Floreano joined forces with Remko Boom from Wageningen University in the Netherlands, Jonathan Rossiter from the University of Bristol, UK, and Mario Caironi from the Italian Institute of Technology to launch the RoboFood project.

In the perspective article, RoboFood authors analyze which edible ingredients can be used to make edible robot parts and whole robots, and discuss the challenges of making them.

“We are still figuring out which edible materials work similarly to non-edible ones,” says Floreano. For example, gelatin can replace rubber, rice cookies are akin to foam, a chocolate film can protect robots in humid environments, and mixing starch and tannin can mimic commercial glues.

Robots au chocolat for dessert?
Credit: Ecole Polytechnique Federale de Lausanne

These and other edible materials make up the ingredients of robotic components. “There is a lot of research on single edible components like actuators, sensors, and batteries,” says Bokeon Kwak, a postdoc in the group of Floreano and one of the authors.

In 2017, EPFL scientists successfully produced an edible gripper, a gelatin-made structure that could handle an apple and be eaten afterward. EPFL, IIT, and the University of Bristol recently developed a new conductive ink that can be sprayed on food to sense its growth. The ink contains activated carbon as a conductor, while Haribo gummy bears are used as a binder. Other sensors can perceive pH, light, and bending.

In 2023, IIT researchers built the first rechargeable edible battery, using riboflavin (vitamin B2) and quercetin (found in almonds and capers) as the electrode materials, adding activated carbon to facilitate electron transport and nori algae, the seaweed used to wrap sushi, to prevent short circuits. Packaged with beeswax, the 4 cm wide edible battery can operate at 0.65 volts, still a safe voltage in case of ingestion; two edible batteries connected in series supply about 1.3 volts and can power a light-emitting diode for about 10 minutes.

Once the components are ready, the goal is to produce fully edible robots. To date, scientists have succeeded in assembling partially edible robotic systems.

In 2022, researchers from EPFL and Wageningen University designed a drone whose wings are made of rice cookies glued together with gelatin. Scientists at EPFL and IIT have also created a partially edible rolling robot with pneumatic gelatin legs and an edible tilt sensor.

Before writing the recipe for fully edible robots, researchers face several challenges. One is the limited understanding of how humans and animals perceive processed food that exhibits reactive and autonomous behavior. Fully edible electronics that use transistors to process information also remain difficult to make.

“But the biggest technical challenge is putting together the parts that use electricity to function, like batteries and sensors, with those that use fluids and pressure to move, like actuators,” says Kwak. After integrating all components, scientists need to miniaturize them, increase the shelf life of robotic food… and give robots a pleasant taste.

More information:
Dario Floreano et al, Towards edible robots and robotic food, Nature Reviews Materials (2024). DOI: 10.1038/s41578-024-00688-9

Citation:
A fully edible robot could soon end up on our plate, say scientists (2024, June 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-fully-edible-robot-plate-scientists.html


Researchers create more precise 3D reconstructions using only two camera perspectives

From two images to a 3D object. Fields of application for 3D reconstructions include autonomous driving and monument conservation. Credit: Technical University of Munich

In recent years, neural methods have become widespread in camera-based 3D reconstruction, but in most cases they need hundreds of camera perspectives. Conventional photometric methods, by contrast, can compute highly precise reconstructions even of objects with textureless surfaces, yet they typically work only under controlled lab conditions.

Daniel Cremers, professor of Computer Vision and Artificial Intelligence at TUM, leader of the Munich Center for Machine Learning (MCML) and a director of the Munich Data Science Institute (MDSI), has developed a method together with his team that combines the two approaches.

It couples a neural network representation of the surface with a precise model of the illumination process that accounts for light absorption and for the distance between the object and the light source. The brightness in the images is used to determine the angle and distance of the surface relative to the light source.
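The shape of such an illumination model can be sketched directly: under a near point light, a surface point's brightness depends on how squarely it faces the light and how far away it is. The Python below is a minimal illustration under assumed Lambertian reflectance and inverse-square falloff; it is not the paper's actual model or code.

```python
# Minimal sketch of near-light shading: predicted pixel brightness from a
# surface point, its normal, and a nearby point light. Lambertian
# reflectance and inverse-square falloff are assumptions for illustration.
import numpy as np

def predicted_brightness(point, normal, light_pos, light_power=1.0, albedo=0.8):
    to_light = light_pos - point
    d = np.linalg.norm(to_light)                       # surface-to-light distance
    l = to_light / d                                   # unit direction to light
    n = normal / np.linalg.norm(normal)
    cos_theta = max(float(np.dot(n, l)), 0.0)          # foreshortening term
    return light_power * albedo * cos_theta / d**2     # 1/d^2 attenuation

# A patch facing straight up, lit from above and slightly to the side:
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
print(predicted_brightness(p, n, light_pos=np.array([0.1, 0.0, 0.5])))
```

Read in reverse, an observed brightness constrains both the orientation of the surface and its distance to the light, which is what allows geometry to be recovered from so few viewpoints.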

“That enables us to model the objects with much greater precision than existing processes. We can use the natural surroundings and can reconstruct relatively textureless objects for our reconstructions,” says Cremers.

The paper is published on the arXiv preprint server and will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2024), held in Seattle from June 17 to 21, 2024.

Applications in autonomous driving and preservation of historical artifacts

The method can be used to preserve historical monuments or digitize museum exhibits. If these are destroyed or decay over time, photographic images can be used to reconstruct the originals and create authentic replicas.

Prof. Cremers’ team also develops neural camera-based reconstruction methods for autonomous driving, in which a camera films the vehicle’s surroundings. The autonomous car can model its surroundings in real time, develop a three-dimensional representation of the scene, and use it to make decisions.

The process is based on neural networks that predict 3D point clouds for individual video images that are then merged into a large-scale model of the roads traveled.
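The fusion step can be pictured as a coordinate transform: each frame's predicted points live in that camera's frame and are mapped into a shared world frame using the camera pose, then concatenated. A minimal sketch, with made-up poses and points standing in for the network outputs:

```python
# Minimal sketch: fuse per-frame 3D point clouds into one world-frame map.
# The points and poses below are synthetic stand-ins for the per-image
# network predictions and camera poses described in the article.
import numpy as np

def to_world(points_cam, R, t):
    """Map Nx3 camera-frame points to the world frame: x_w = R @ x_c + t."""
    return points_cam @ R.T + t

rng = np.random.default_rng(0)
world_map = []
for i in range(3):                             # three consecutive video frames
    pts = rng.normal(size=(100, 3))            # stand-in per-frame point cloud
    yaw = 0.1 * i                              # stand-in pose: slight turn...
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    t = np.array([0.0, 0.5 * i, 0.0])          # ...plus forward motion
    world_map.append(to_world(pts, R, t))

large_scale_model = np.vstack(world_map)       # merged road-scale model
print(large_scale_model.shape)                 # (300, 3)
```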

More information:
Mohammed Brahimi et al, Sparse Views, Near Light: A Practical Paradigm for Uncalibrated Point-light Photometric Stereo, arXiv (2024). DOI: 10.48550/arxiv.2404.00098

Journal information:
arXiv


Citation:
Researchers create more precise 3D reconstructions using only two camera perspectives (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-precise-3d-reconstructions-camera-perspectives.html


Turing test study shows humans rate artificial intelligence as more ‘moral’ than other people

Credit: Pixabay/CC0 Public Domain

A new study has found that when people are presented with two answers to an ethical question, most will think the answer from artificial intelligence (AI) is better than the response from another person.

“Attributions Toward Artificial Agents in a Modified Moral Turing Test,” a study conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs), which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse.”

“So, if we want to use these tools, we should understand how they operate, their limitations and that they’re not necessarily operating in the way we think when we’re interacting with them.”

To test how AI handles issues of morality, Aharoni designed a form of a Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000, computers might pass a test where you present an ordinary human with two interactants, one human and the other a computer, but they’re both hidden and their only way of communicating is through text. Then the human is free to ask whatever questions they want to in order to try to get the information they need to decide which of the two interactants is human and which is the computer,” Aharoni said.

“If the human can’t tell the difference, then, by all intents and purposes, the computer should be called intelligent, in Turing’s view.”

For his Turing test, Aharoni asked undergraduate students and AI the same ethical questions and then presented their written answers to participants in the study. They were then asked to rate the answers for various traits, including virtuousness, intelligence and trustworthiness.

“Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people,” Aharoni said. “Under that false assumption, they judged the answers’ attributes like ‘How much do you agree with this response, which response is more virtuous?'”
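The comparison at the heart of this design boils down to whether mean trait ratings differ by source. The snippet below illustrates that kind of comparison with invented numbers; it is not the study's data or analysis.

```python
# Minimal sketch: compare trait ratings for AI- vs human-written answers.
# The ratings are invented for illustration; the study's actual data and
# statistics may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ai_ratings = rng.normal(loc=5.6, scale=1.0, size=200)     # e.g. virtuousness, 1-7
human_ratings = rng.normal(loc=4.8, scale=1.0, size=200)

t, p = stats.ttest_ind(ai_ratings, human_ratings)
print(f"AI mean {ai_ratings.mean():.2f}, human mean {human_ratings.mean():.2f}, p = {p:.3g}")
```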

Overwhelmingly, the ChatGPT-generated responses were rated more highly than the human-generated ones.

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer and asked them to guess which was which,” Aharoni said.

For an AI to pass the Turing test, humans must not be able to tell the difference between AI responses and human ones. In this case, people could tell the difference, but not for an obvious reason.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” Aharoni said. “If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite—that the AI, in a sense, performed too well.”

According to Aharoni, this finding has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test—that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer, and there will be times when they do know and they will consult the computer for information because they trust it more than other people,” Aharoni said.

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

The findings are published in the journal Scientific Reports.

More information:
Eyal Aharoni et al, Attributions toward artificial agents in a modified Moral Turing Test, Scientific Reports (2024). DOI: 10.1038/s41598-024-58087-7

Citation:
Turing test study shows humans rate artificial intelligence as more ‘moral’ than other people (2024, May 6)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-turing-humans-artificial-intelligence-moral.html


European tech must keep pace with US, China: Meta’s Clegg

Credit: CC0 Public Domain

Europe is lagging behind both the United States and China when it comes to technology and innovation, Nick Clegg, a top executive with US firm Meta, has told AFP.

Clegg, president of global affairs at the parent company of Facebook, Instagram and WhatsApp, said Europe had a “real problem”.

“We are falling very rapidly behind the US and China,” said Clegg, who was promoting a scheme to mentor startups on the continent.

“I think for too long, the view has been that Europe’s only role is to regulate. And then China imitates and America innovates.”

But he argued it was not possible to “build success on the back of a law”.

“You build success on the back of innovation, entrepreneurship, and a partnership between big tech companies and small startups.”

Clegg was promoting a scheme run by Meta and two French companies to offer five European startups six months of mentoring and access to their facilities.

Clegg has spearheaded previous efforts by Meta to invest in tech in Europe, announcing in 2021 that the US firm would create 10,000 jobs there to help build the “metaverse”.

Meta burnt through billions of dollars trying to make its metaverse project a reality but has since changed focus to artificial intelligence and announced thousands of layoffs, including in the teams working on the metaverse.

© 2024 AFP

Citation:
European tech must keep pace with US, China: Meta’s Clegg (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-european-tech-pace-china-meta.html
