
New electronic skin mimics human touch with 3D architecture

3D architected electronic skin mimicking human mechanosensation
(A) Bio-inspired design of the 3D architected electronic skin (3DAE-skin). (B) 3DAE-skin attached to the finger tip of a robot hand. (C-G) Optical and microscope images of the 3DAE-skin. Credit: Science (2024). DOI: 10.1126/science.adk5556

Human skin possesses powerful sensing capabilities that scientists have long sought to reproduce. Replicating the spatial arrangement of its complex 3D microstructure, however, remains a challenge for today's technologies.

A research team led by Professor Yihui Zhang from Tsinghua University has developed a three-dimensionally architected electronic skin (3DAE-Skin) that mimics human mechanosensation, enabling fully decoupled sensing of normal force, shear force, and strain.

Their findings were published in Science.

Taught by nature

Inspired by human skin, they created an electronic skin whose force- and strain-sensing components are arranged in a 3D layout mimicking that of Merkel cells and Ruffini endings, the slowly adapting mechanoreceptors in human skin.

The resulting 3DAE-Skin shows excellent decoupled sensing performance for normal force, shear force, and strain, and is the first device of its kind to adopt such a bio-inspired 3D sensor layout.

Enchanted by artificial intelligence

With the assistance of artificial intelligence, they developed a tactile system for simultaneous modulus/curvature measurements of an object through touch. Demonstrations include rapid modulus measurements of fruits, bread, and cake with various shapes and degrees of freshness.
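
As a rough illustration of how an elastic modulus can be inferred from touch data, the sketch below fits synthetic force-indentation readings to the classical Hertz contact model. This is generic contact mechanics with assumed parameters (indenter radius, modulus value), not the paper's AI-based method.

```python
import numpy as np

# Generic Hertzian-contact estimate of effective modulus E* from
# force-indentation data; illustrative only, not the 3DAE-Skin's AI method.
# Hertz model for a rigid sphere of radius R pressed into a flat sample:
#   F = (4/3) * E_star * sqrt(R) * d**1.5

R = 2e-3                                    # indenter radius: 2 mm (assumed)
d = np.linspace(0.1e-3, 1.0e-3, 10)         # indentation depths (m)
E_true = 50e3                               # e.g. soft fruit flesh, ~50 kPa (assumed)
F = (4 / 3) * E_true * np.sqrt(R) * d**1.5  # synthetic force readings (N)

# Least-squares fit of F against d**1.5 recovers the slope, hence E*
slope = np.linalg.lstsq(d[:, None]**1.5, F, rcond=None)[0][0]
E_star = 3 * slope / (4 * np.sqrt(R))
print(f"estimated modulus: {E_star / 1e3:.1f} kPa")
```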

3D architected electronic skin mimicking human mechanosensation
Credit: Tsinghua University

The resulting technology enables rapid measurement of the friction coefficient and modulus of objects with diverse shapes, with potential applications in freshness assessment, biomedical diagnosis, humanoid robots, and prosthetic systems, among others.

Zhang’s study was done with colleagues from Tsinghua University’s Applied Mechanics Laboratory, Department of Engineering Mechanics and Laboratory of Flexible Electronics Technology.

More information:
Zhi Liu et al, A three-dimensionally architected electronic skin mimicking human mechanosensation, Science (2024). DOI: 10.1126/science.adk5556

Citation:
New electronic skin mimics human touch with 3D architecture (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-electronic-skin-mimics-human-3d.html


Using AI to decode dog vocalizations

An AI tool developed at the University of Michigan can tell playful barks from aggressive ones—as well as identifying the dog’s age, sex and breed. Credit: Marcin Szczepanski/Michigan Engineering.

Have you ever wished you could understand what your dog is trying to say to you? University of Michigan researchers are exploring the possibilities of AI, developing tools that can identify whether a dog’s bark conveys playfulness or aggression.

The same models can also glean other information from animal vocalizations, such as the animal's age, breed, and sex. Conducted in collaboration with Mexico's National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, the study finds that AI models originally trained on human speech can serve as a starting point for training new systems that target animal communication.

The results were presented at the Joint International Conference on Computational Linguistics, Language Resources and Evaluation. The study is published on the arXiv preprint server.

“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, and director of U-M’s AI Laboratory.

“There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch.”

One of the prevailing obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are numerous resources and opportunities for recording human speech, collecting such data from animals is more difficult.

“Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author and U-M doctoral student in computer science and engineering. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”

Artem Abzaliev and his dog, Nova, in Nuremberg, Germany. The AI software he developed with Rada Mihalcea and Humberto Pérez-Espinosa can identify whether a dog’s bark is playful or aggressive as well as identifying breed, sex and age. Credit: Abzaliev

Because of this dearth of usable data, techniques for analyzing dog vocalizations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. The researchers overcame these challenges by repurposing an existing model that was originally designed to analyze human speech.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation. These models are trained to distinguish nuances in human speech, like tone, pitch and accent, and convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.

“These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”

The researchers used a dataset of vocalizations recorded from 74 dogs of varying breed, age, and sex in a variety of contexts. Humberto Pérez-Espinosa, a collaborator at INAOE, led the team that collected the dataset. Abzaliev then used the recordings to modify a machine-learning model, a type of computer algorithm that identifies patterns in large data sets. The team chose a speech representation model called Wav2Vec2, which was originally trained on human speech data.
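
As a minimal sketch of what such a fine-tuning setup could look like: the checkpoint name below is a real public Wav2Vec2 model, but the label set, synthetic waveform, and training details are hypothetical stand-ins for the study's actual pipeline.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

# Hypothetical labels; the study also classified dog age, sex and breed.
labels = ["playful", "aggressive"]

# Load a Wav2Vec2 model pretrained on human speech, with a fresh
# classification head sized for the bark-classification task.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(labels)
)

# One synthetic 16 kHz waveform stands in for a real bark recording.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

# One illustrative fine-tuning step with a made-up label.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
target = torch.tensor([0])                  # hypothetical label: "playful"
loss = model(**inputs, labels=target).loss  # cross-entropy over the classes
loss.backward()
optimizer.step()
```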

With this model, the researchers were able to generate representations of the acoustic data collected from the dogs and interpret them. They found that Wav2Vec2 not only succeeded at four classification tasks but also outperformed other models trained specifically on dog bark data, achieving accuracies of up to 70%.

“This is the first time that techniques optimized for human speech have been built upon to help with the decoding of animal communication,” Mihalcea said. “Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations.”

In addition to establishing human speech models as a useful tool in analyzing animal communication—which could benefit biologists, animal behaviorists and more—this research has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to the emotional and physical needs of dogs, thereby enhancing their care and preventing potentially dangerous situations, the researchers said.

More information:
Artem Abzaliev et al, Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification, arXiv (2024). DOI: 10.48550/arxiv.2404.18739

Journal information:
arXiv


Citation:
Using AI to decode dog vocalizations (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-ai-decode-dog-vocalizations.html


Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems

The new protocol structure ensures global and semi-global finite time consensus for both leaderless and leader-following multi-agent systems and allows the calculation of an upper-bound for settling time for the closed-loop system. Credit: Chinese Association of Automation

Consensus problems, in which a group of agents such as unmanned vehicles, machines, or robots must agree on certain variables using only local communication among themselves, have attracted considerable attention as a fundamental issue in the cooperative control of multi-agent systems. Simply put, a multi-agent system comprises multiple decision-making agents that interact in a common environment to achieve common or conflicting goals, depending on the situation.

Depending on whether agents track a predetermined leader, these problems can be classified into leaderless or leader-following consensus. Researchers have extensively studied both types of problems and developed consensus protocols. However, most current protocols only provide asymptotic consensus.

Some applications, however, require exact consensus within a limited time, known as finite-time consensus. Achieving such consensus improves control accuracy and stability. In practice, finite-time consensus demands considerable control effort, yet control effort is physically limited, and neglecting these limits can degrade controller performance.

Studies have explored finite-time control methods subject to such constraints, but most rely on homogeneity theory, under which ensuring convergence to consensus is difficult and an exact settling time is hard to estimate.

Addressing these issues, a team of researchers, including Professor Zongyu Zuo, Mr. Jingchuan Tan, and Mr. Ruiqi Ke, all from Beihang University, China, and IEEE Fellow Professor Qing-Long Han from Swinburne University of Technology, Australia, developed a novel protocol structure for achieving global and semi-global finite-time consensus for both leaderless and leader-following multi-agent systems. Their study was published in the IEEE/CAA Journal of Automatica Sinica.

The team was motivated by a fascination with the potential of robotic systems and artificial intelligence to transform our daily lives and tackle complex societal challenges efficiently and sustainably. Prof. Zuo intuitively explains their work, “Imagine a group of dancers who need to perform a synchronized routine, without directly seeing each other, only following cues from those nearby. Our work is akin to creating a set of rules that helps these dancers synchronize perfectly in a short time, ensuring everyone performs beautifully together even if they have limitations in how quickly they can move.”

The protocols presented in the study use a hyperbolic tangent function in place of the non-smooth saturation function found in traditional protocols. They guarantee global and semi-global finite-time consensus for integrator- and double-integrator-type systems, respectively. Moreover, they allow explicit calculation of an upper bound on the settling time and enforce a user-prescribed bound on the control level of the closed-loop system, making them highly practical for real-world applications.

Additionally, unlike traditional protocols, the hyperbolic tangent function avoids the need to determine input saturation for each agent, simplifying the design and stability analysis of the protocols. The researchers demonstrated the effectiveness of the new protocol structure through illustrative examples for single- and double-integrator multi-agent systems and by applying it to a practical system with multiple direct current motors.
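
To make the idea concrete, here is a toy simulation of a tanh-saturated consensus law for four single-integrator agents on a ring graph. The protocol form u_i = -k·tanh(c·Σ_j a_ij(x_i - x_j)) and the gains are illustrative assumptions in the same spirit as the paper, not its exact design.

```python
import numpy as np

# Adjacency matrix of a ring of 4 agents (assumed communication topology).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

k, c, dt, steps = 2.0, 5.0, 0.01, 2000      # gains and step size (assumed)
x = np.array([3.0, -1.0, 4.0, 0.5])         # initial agent states

for _ in range(steps):
    # Disagreement of each agent with its neighbors (graph Laplacian action).
    disagreement = A.sum(axis=1) * x - A @ x
    # tanh saturation keeps every control input bounded: |u_i| <= k.
    u = -k * np.tanh(c * disagreement)
    x = x + dt * u                          # single-integrator dynamics x_i' = u_i

print(x)  # all states converge toward a common consensus value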

Highlighting the practical applications of this study, Prof. Zuo says, “These protocols have broad applications, such as autonomous drone fleets for agricultural or surveillance tasks, coordinated control of robotic arms, and synchronized traffic light systems.

“Ultimately, our research could improve the efficiency and reliability of autonomous systems. For example, better traffic management systems could reduce congestion and pollution, while more coordinated disaster response robots could save lives during crises.”

Overall, the innovative protocol structure marks a significant achievement in the field of consensus problems, leading to enhanced multi-agent autonomous systems.

More information:
Zongyu Zuo et al, Hyperbolic Tangent Function-Based Protocols for Global/Semi-Global Finite-Time Consensus of Multi-Agent Systems, IEEE/CAA Journal of Automatica Sinica (2024). DOI: 10.1109/JAS.2024.124485

Provided by
Chinese Association of Automation

Citation:
Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-protocol-finite-consensus-multi-agent.html


The YouTube algorithm isn’t radicalizing people, says bots study

Credit: Pixabay/CC0 Public Domain

About a quarter of Americans get their news on YouTube. With its billions of users and hours upon hours of content, YouTube is one of the largest online media platforms in the world.

In recent years, there has been a popular narrative in the media that videos from highly partisan, conspiracy theory-driven YouTube channels radicalize young Americans and that YouTube’s recommendation algorithm leads users down a path of increasingly radical content.

However, a new study from the Computational Social Science Lab (CSSLab) at the University of Pennsylvania finds that users’ own political interests and preferences play the primary role in what they choose to watch. In fact, if the recommendation features have any impact on users’ media diets, it is a moderating one.

“On average, relying exclusively on the recommender results in less partisan consumption,” says lead author Homa Hosseinmardi, associate research scientist at the CSSLab.

YouTube bots

To determine the true effect of YouTube's recommendation algorithm on what users watch, the researchers created bots that either followed the recommendation engine or ignored it entirely. The bots were trained on the YouTube watch histories of 87,988 real-life users, collected from October 2021 to December 2022.

Hosseinmardi and co-authors Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, and Duncan J. Watts aimed to untangle the complex relationship between user preferences and the recommendation algorithm, a relationship that evolves with each video watched.

These bots were assigned individualized YouTube accounts so that their viewing history could be tracked, and the partisanship of what they watched was estimated using the metadata associated with each video.

During two experiments, the bots, each with its own YouTube account, went through a “learning phase”—they watched the same sequence of videos to ensure that they all presented the same preferences to YouTube’s algorithm.

Next, the bots were placed into groups. Some continued to follow the watch histories of the real-life users they were trained on; others were assigned to be experimental "counterfactual bots" that followed specific rules designed to separate user behavior from algorithmic influence.

In experiment one, after the learning phase, control bots continued to watch videos from their users' histories, while counterfactual bots deviated from the users' real-life behavior and selected videos only from the recommendation lists, without taking user preferences into account.

Some counterfactual bots always selected the first (“up next”) video from the sidebar recommendations; others randomly selected one of the top 30 videos listed in the sidebar recommendations; and others randomly selected a video from the top 15 videos in the homepage recommendations.
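
In code, the three counterfactual selection rules amount to something like the sketch below. The recommendation lists would come from the study's YouTube instrumentation, which is stubbed out here as plain lists of video IDs.

```python
import random

# Sketch of the three counterfactual selection rules described above.
# `sidebar` and `homepage` stand in for the recommendation lists the
# study's bots scraped from YouTube; here they are just video-ID lists.

def next_video(rule, sidebar, homepage):
    if rule == "up_next":
        return sidebar[0]                    # always the first ("up next") video
    if rule == "sidebar_random":
        return random.choice(sidebar[:30])   # one of the top 30 sidebar videos
    if rule == "homepage_random":
        return random.choice(homepage[:15])  # one of the top 15 homepage videos
    raise ValueError(f"unknown rule: {rule!r}")

# Example: a counterfactual bot following the "up next" rule.
print(next_video("up_next", ["vid_a", "vid_b"], ["vid_c"]))
```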

The researchers found that the counterfactual bots, on average, consumed less partisan content than the corresponding real user—a result that is stronger for heavier consumers of partisan content.

“This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends,” Hosseinmardi says. “The study exhibits similar moderating effects on bots consuming far-left content, or when bots are subscribed to channels on the extreme side of the political partisan spectrum.”

‘Forgetting time’ of recommendation algorithms

In experiment two, researchers aimed to estimate the “forgetting time” of the YouTube recommender.

“Recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users long after they have lost interest in it themselves,” Hosseinmardi says.

During this experiment, the researchers calculated the recommender's forgetting time for a user with a long history (120 videos) of far-right video consumption who then switches to a diet of moderate news for the next 60 videos.

While the control bots continued watching a far-right diet for the whole experiment, counterfactual bots simulated a user “switching” from one set of preferences (watching far-right videos) to another (watching moderate videos). As the counterfactual bots changed their media preferences, the researchers tracked the average partisanship of recommended videos in the sidebar and homepage.
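
One way to estimate a "forgetting time" from such a trajectory is sketched below: track the average partisanship of recommended videos after the diet switch and find when it first drops below a moderate threshold. The decay curve and threshold here are synthetic illustrations, not the study's data.

```python
import numpy as np

# Synthetic mean partisanship of sidebar recommendations after the switch,
# decaying from far-right (~1.0) toward moderate (~0.2), plus noise.
rng = np.random.default_rng(0)
n = 60
traj = 0.2 + 0.8 * np.exp(-np.arange(n) / 12) + rng.normal(0, 0.03, n)

threshold = 0.3  # hypothetical cutoff for "moderate" content
# 5-video moving average smooths out noise before finding the crossing.
smooth = np.convolve(traj, np.ones(5) / 5, mode="valid")
crossed = np.argmax(smooth < threshold)
print(f"sidebar recommendations turned moderate after ~{crossed} videos")
```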

“On average, the recommended videos on the sidebar shifted toward moderate content after about 30 videos,” Hosseinmardi says, “while homepage recommendations tended to adjust less rapidly, showing homepage recommendations cater more to one’s preferences and sidebar recommendations are more related to the nature of the video currently being watched.”

“The YouTube recommendation algorithm has been accused of leading its users toward conspiratorial beliefs. While these accusations hold some merit, we must not overlook that users have a significant agency over their actions and may have viewed the same content, or worse, even without any recommendations,” Hosseinmardi says.

Moving forward, the researchers hope that others will adopt their method for studying AI-mediated platforms where user preferences and algorithms interact, to better understand the role that algorithmic recommendation engines play in our daily lives.

The findings are published in the journal Proceedings of the National Academy of Sciences.

More information:
Homa Hosseinmardi et al, Causally estimating the effect of YouTube’s recommender system using counterfactual bots, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2313377121

Citation:
The YouTube algorithm isn’t radicalizing people, says bots study (2024, February 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-youtube-algorithm-isnt-radicalizing-people.html


Not quite ready for autonomous taxis? Teledriving could be a bridge

A Vay driver operates a vehicle remotely from a control center that features multiple video feeds. Tele-driving could offer many of the efficiencies that autonomous vehicles are expected to provide, according to research at U-M. Credit: Vay.

At a time when the general public may not yet accept driverless taxis and ride-hailing vehicles, teledriving could offer many of the same benefits, according to a new study led by a University of Michigan researcher. The research is published in the journal Management Science.

With more vehicles available, fewer drivers, and fewer riderless miles, ride-sharing services could become faster and more affordable.

Teledriving typically involves a driver operating a car while sitting in front of a bank of screens that show video feeds from cameras on the car, as well as sensors and augmented reality technology. Once a passenger has been picked up, transported and dropped off, the driver can disconnect from that vehicle and connect to any other that is available in an area of need. Several private teledriving companies are already in operation, including Halo.Car and Vay in Las Vegas.

One of the main advantages of teledriving, according to researchers led by Saif Benjaafar, U-M professor of industrial and operations engineering, is that drivers do not need to be where the vehicles are, eliminating, for instance, the need to drive from areas of low demand to areas of high demand with no rider. In this vein, teledriving can avoid what ride-hailing services call the "wild goose chase" scenario: when vehicle supply is low, cars must be dispatched to customers located far away, even when that is not the most efficient pairing of rider and driver.

“Teledriving allows you to get away with far fewer drivers than vehicles without impacting the quality of service because you can still leverage the excess vehicles to get quickly to customers—a reduction of 30% to 40% in some of the test cases we considered,” said Benjaafar, who specializes in supply chains and logistics.

“There’s an opportunity to significantly increase how busy the drivers are. One of the challenges for ride services has always been having drivers who are sitting idle. Quite a bit of that inefficiency can be eliminated.”

The remaining drivers also stand to benefit, as this system would shift vehicle ownership, along with the cost and risk it entails, onto the rideshare company. Teledriving may also broaden labor participation as driving becomes a desk job, the researchers suggest.

Finally, the team is optimistic that the separation of drivers and riders could improve the safety of both, particularly women who have been disproportionately the targets of in-vehicle assault and other criminal behavior. However, teledriving systems also need to guard against reckless driving in a work environment that feels more like a video game.

Using computer modeling that factors in supply, demand, and road congestion over both time and space, the researchers showed that keeping more vehicles available than drivers can shorten wait times in periods of high demand, even with fewer drivers, because it reduces the likelihood of a driver going on a wild goose chase.
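
As a back-of-the-envelope illustration of why this works (my own toy Monte Carlo, not the study's model): the nearest of N uniformly placed idle vehicles gets closer to a rider as N grows, so a teledriven fleet with more idle vehicles than drivers shortens pickup distances. All fleet sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_pickup_distance(n_idle_vehicles, trials=20_000):
    """Average distance from a random rider to the nearest idle vehicle
    in a unit square, over many random trials."""
    riders = rng.random((trials, 2))
    vehicles = rng.random((trials, n_idle_vehicles, 2))
    d = np.linalg.norm(vehicles - riders[:, None, :], axis=2)
    return d.min(axis=1).mean()

# Conventional fleet: one idle vehicle per idle driver.
print("10 idle vehicles:", mean_pickup_distance(10))
# Teledriven fleet: ~1.5x as many idle vehicles as drivers.
print("15 idle vehicles:", mean_pickup_distance(15))
```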

“This research can be applied to the world/community by improving the efficiency and cost-effectiveness of ride-hailing and other on-demand transportation services,” said Xiaotang Yang, a postdoctoral fellow at the University of Toronto’s Rotman School of Management.

“By using teledriving, platforms can potentially operate with fewer drivers while maintaining or even improving service quality, which can lower operational costs and make these services more accessible. Additionally, this approach can help reduce traffic congestion and waiting times, leading to a better overall experience for users.”

Benjaafar's deep dive into the benefits and efficiencies of teledriving comes as efforts to bring autonomous ride services into the mainstream have stalled for a variety of reasons. Safety is chief among them, and a regular stream of news articles chronicling crashes involving vehicles without human drivers only reinforces those concerns.

“Full autonomy may take longer to become a reality,” Benjaafar said. “In the meantime, there are these technologies that can serve as a bridge toward full autonomy, including putting the human driver back into the loop.”

The research was initiated while Benjaafar and Yang were at the University of Minnesota. They were joined by Zicheng Wang, now an assistant professor at the Chinese University of Hong Kong-Shenzhen.

More information:
Saif Benjaafar et al, Human in the Loop Automation: Ride-Hailing with Remote (Tele-)Drivers, Management Science (2024). DOI: 10.1287/mnsc.2022.01687

Citation:
Not quite ready for autonomous taxis? Teledriving could be a bridge (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-ready-autonomous-taxis-teledriving-bridge.html
