
A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life

General scheme of ADAM elements from back and front view. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Worldwide, humans are living longer than ever before. According to data from the United Nations, approximately 13.5% of the world’s people were at least 60 years old in 2020, and by some estimates, that figure could increase to nearly 22% by 2050.

Advanced age can bring cognitive and/or physical difficulties, and with more and more elderly individuals potentially needing assistance to manage such challenges, advances in technology may provide the necessary help.

One of the newest innovations comes from a collaboration between researchers at Spain’s Universidad Carlos III and the manufacturer Robotnik. The team has developed the Autonomous Domestic Ambidextrous Manipulator (ADAM), an elderly care robot that can assist people with basic daily functions. The team reports on its work in Frontiers in Neurorobotics.

ADAM, an indoor mobile robot that stands upright, features a vision system and two arms with grippers. It can adapt to homes of different sizes for safe and optimal performance. It respects users’ personal space while helping with domestic tasks and learning from its experiences via an imitation learning method.

On a practical level, ADAM can pass through doors and perform everyday tasks such as sweeping a floor, moving objects and furniture as needed, setting a table, pouring water, preparing a simple meal, and bringing items to a user upon request.

Credit: Gonzalo Espinoza / Universidad Carlos III Robotics Lab

In their review of existing developments in this arena, the researchers describe several recently developed robots adapted to assist elderly individuals with cognitive tasks, such as memory training and games to help alleviate dementia symptoms, and with physical tasks, such as detecting falls and sending notifications, helping users manage home automation systems, retrieving items from the floor, and storing items in areas of the home the user cannot reach.

Against this backdrop, the team behind this new work aimed to design a robot with unique features to assist users with physical tasks in their own homes.

Next-level personal care through modular design and a learning platform

Several features set ADAM apart from existing personal care robots. The first is its modular design: a base, cameras, arms, and hands that together provide multiple sensory inputs. Each of these units can work independently or cooperatively, at a high or low level. Importantly, this means the robot can support research while meeting users' personal care needs.

In addition, ADAM's arms are collaborative, allowing users to operate them directly, and can adjust their movements to the immediate environment. As a basic safety feature, the robot continuously tracks the people around it to avoid collisions while providing personal care.

Visual description of the ADAM service robotic platform and its four main capabilities for the development of elderly care tasks: perception of the environment, navigation and environmental comprehension, social navigation and manipulation learning. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Technical aspects

ADAM stands 160 cm tall—about the height of a petite human adult. Its arms, whose maximum load capacity is 3 kg, extend to a width of 50 cm. The researchers point out that they designed the robot to “simulate the structure of a human torso and arms. This is because a human-like structure allows it to work more comfortably in domestic environments because the rooms, doors, and furniture are adapted to humans.”

Batteries in ADAM’s base power its movements, cameras, and 3D LiDAR sensors. With all systems running, the robot’s minimum battery life is just under four hours, and battery charging takes a little over two hours. It can rotate in place and move forward and backward, but not laterally.
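
That motion pattern matches a standard differential-drive (unicycle) kinematic model. As a minimal sketch, assuming such a model rather than ADAM's actual, unpublished controller, the pose update looks like this:

```python
import numpy as np

def step_pose(x, y, theta, v, omega, dt):
    """Advance a differential-drive base one time step.

    v is the forward velocity (m/s) and omega the in-place rotation
    rate (rad/s). The model has no lateral velocity term, matching a
    base that can turn in place and drive forward or backward but
    cannot move sideways.
    """
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```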

ADAM includes two internally connected computers—one for the base and the other for the arms—and a WiFi module for external communication. An RGBD camera and 2D LiDAR help to control basic forward movement, complemented by additional RGBD and LiDAR sensors positioned higher in the unit that expand its perception angle and range.

Visualization of the ADAM model in simulation, where the reference systems of the base and arms can be seen. The reference frame transformations between them are schematically represented. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

The additional RGBD sensor is a RealSense D435 depth camera that includes an RGB module and infrared stereo vision, while the additional LiDAR sensor provides 3D spatial details that work with a geometric mapping algorithm to map the entirety of objects in the environment.
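
As a rough illustration of how an RGBD sensor feeds such a mapping pipeline, the sketch below back-projects a depth image into a 3D point cloud using a standard pinhole camera model; the intrinsic parameters are placeholders, not the D435's actual calibration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D camera-frame points.

    fx, fy are focal lengths in pixels and (cx, cy) the principal
    point; in practice these come from the camera's calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3) point map

# Example with a synthetic 480x640 depth image and placeholder intrinsics
points = depth_to_points(np.full((480, 640), 2.0), fx=600, fy=600, cx=320, cy=240)
```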

ADAM's arms have an approximate range of motion of 360°, and its hands are a parallel gripper system (the "Duck Gripper"). The gripper contains an independent power supply and a Raspberry Pi Zero 2 W board that communicates over WiFi with a corresponding Robot Operating System (ROS) node. Force-sensing resistors (FSRs) on each gripper jaw help the hands grasp and pick up objects with the appropriate amount of force.
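
A minimal sketch of how such a gripper board might report its force readings, assuming ROS 1 with rospy; the node and topic names are invented for illustration, and read_fsr() stands in for the analog-to-digital driver that a real FSR (an analog sensor) requires:

```python
import rospy
from std_msgs.msg import Float32

def read_fsr():
    """Hypothetical stub for the ADC driver that samples one jaw's FSR."""
    return 0.0  # newtons

def main():
    rospy.init_node("duck_gripper_force")            # invented node name
    pub = rospy.Publisher("gripper/force", Float32,  # invented topic name
                          queue_size=10)
    rate = rospy.Rate(50)  # publish force feedback at 50 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(read_fsr()))
        rate.sleep()

if __name__ == "__main__":
    main()
```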

Acing an early test involving collaboration

The researchers report that they have successfully tested ADAM as part of the Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (HIMTAE) project. Collaborating with researchers from Spain’s University of Cartagena and Sweden’s University of Örebro, they presented ADAM as an integral part of a team including multiple robots and home automation systems.

Information captured by the perception system. The main sources of information are the RGB image and the corresponding depth values from the RGBD sensor and the 3D spatial information from the LiDAR sensor, which covers a full room. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Within the test, another robot (“Robwell”) had established an “empathetic relationship” with users, who wore bracelets to monitor their mental and physical states and communicate them to Robwell.

Robwell, in turn, would remind the users to drink water when needed and communicate with both the home automation system and ADAM regarding specific user needs. ADAM's role was to perform tasks within the kitchen, preparing and delivering food or water to Robwell, which would then provide it to the users.

Users who participated in the test reported an average satisfaction of 93% with the outcome. The researchers note that employing two robots was effective: Robwell could monitor and engage with users while ADAM worked in the kitchen. Users could also enter the kitchen and interact with ADAM while it performed tasks, and ADAM could likewise interact with users while they performed tasks of their own.

  • Duck Gripper final design with an exploded view of the gripper and its main components. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608
  • Duck Gripper performance test movements, from left to right and top to bottom: the gripper grabs the object on the workstation, moves away from the robot, rotates the end effector 90° clockwise, moves back toward the robot, rotates the end effector 90° counterclockwise, and opens to release the object. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

What’s needed next?

As the HIMTAE test results were obtained within a controlled laboratory environment, the team cautions that future tests must take place in authentic domestic environments to determine user satisfaction with ADAM’s performance.

Looking ahead, the researchers observe, “The perception system is fixed, so in certain situations, ADAM will not be able to detect specific parts of the environment. The bimanipulation capabilities of ADAM are not fully developed, and the arms configuration is not optimized.” In addition to focusing on improvements in these areas, they write that “new task and motion planning strategies will be implemented to deal with more complex home tasks, which will make ADAM a much more complete robot companion for elderly care.”

More information:
Alicia Mora et al, ADAM: a robotic companion for enhanced quality of life in aging populations, Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

© 2024 Science X Network

Citation:
A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life (2024, February 19)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-elderly-robot-personal-seniors-quality.html


Car dealers across US are crippled by a second cyberattack

car dealership
Credit: Unsplash/CC0 Public Domain

Auto retailers across the U.S. suffered a second major disruption in as many days due to another cyberattack at CDK Global, the software provider on which thousands of dealers rely to run their stores.

CDK informed customers on Thursday of the incident that had occurred late the prior evening. The company shut down most of its systems again, saying in a recorded update that it doesn’t have an estimate for how long it will take to restore services.

“Our dealers’ systems will not be available at a minimum on Thursday,” the company said.

On what otherwise would have been a busy U.S. holiday for business, dealers reliant on CDK were unable to use its systems to complete transactions, access customer records, schedule appointments or handle car-repair orders. The company serves almost 15,000 dealerships, supporting front-office salespeople, back-office support staff and parts-and-service shops.

AutoNation Inc. led shares of publicly listed dealership groups lower Thursday, falling as much as 4.6% in intraday trading. Lithia Motors Inc., Group 1 Automotive Inc. and Sonic Automotive Inc. also slumped.

Greg Thornton, the general manager of a dealership group in Frederick, Maryland, said his stores’ CDK customer-relations software had been down since early Wednesday morning.

“I can only assume that CDK is working all hands on deck to resolve this,” said Thornton, whose group includes Audi and Volvo stores. “We’ve had no conversations with them in person or over the phone.”

Sam Pack’s Five Star Chevrolet outside Dallas sold four vehicles on Wednesday despite the initial outage, but has had to adapt, such as by handling some tasks on paper until service is restored, said Alan Brown, the store’s general manager. While sales staff are able to submit approvals to lenders, the outage has blocked other elements of a transaction, such as obtaining titles.

“We’re still doing business,” Brown said. “It’s just not our normal flow.”

CDK hasn't yet provided a timeline for when its systems will be available again, he said.

The National Automobile Dealers Association said Wednesday it was actively seeking information from CDK to determine the nature and scope of the cyber-incident.

CDK was spun off by Automatic Data Processing Inc. in 2014, then agreed to be acquired in April 2022 by the investment company Brookfield Business Partners in an all-cash deal valued at $6.4 billion.

© 2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.

Citation:
Car dealers across US are crippled by a second cyberattack (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-car-dealers-crippled-cyberattack.html


New electronic skin mimics human touch with 3D architecture

3D architected electronic skin mimicking human mechanosensation
(A) Bio-inspired design of the 3D architected electronic skin (3DAE-skin). (B) 3DAE-skin attached to the finger tip of a robot hand. (C-G) Optical and microscope images of the 3DAE-skin. Credit: Science (2024). DOI: 10.1126/science.adk5556

Human skin, created by nature, has powerful sensing capabilities that scientists have pursued for a very long time. However, it remains challenging for today's technologies to replicate the spatial arrangement of skin's complex 3D microstructure.

A research team led by Professor Yihui Zhang from Tsinghua University has developed a three-dimensionally architected electronic skin that mimics human mechanosensation for fully decoupled sensing of normal force, shear force, and strain.

Their findings were published in Science.

Taught by nature

Inspired by human skin, the team created a three-dimensionally architected electronic skin (3DAE-Skin) whose force and strain sensing components are arranged in a 3D layout mimicking that of the Merkel cells and Ruffini endings in human skin.

The 3DAE-Skin shows excellent decoupled sensing performance for normal force, shear force, and strain. It is the first device of its kind to replicate the layout of the slowly adapting mechanoreceptors in human skin.
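
One way to picture "decoupled sensing" is as inverting a calibration map from the three stimuli (normal force, shear force, strain) to three sensor channels. The sketch below uses a made-up linear calibration matrix purely for illustration; the real device's response is more complex and is characterized experimentally:

```python
import numpy as np

# Hypothetical calibration: rows are sensor channels, columns are the
# stimuli [normal force (N), shear force (N), strain]. Values invented.
M = np.array([[1.00, 0.05, 0.02],
              [0.04, 0.90, 0.03],
              [0.01, 0.02, 1.10]])

def decouple(readings):
    """Recover [F_normal, F_shear, strain] from raw channel readings."""
    return np.linalg.solve(M, readings)

stimulus = np.array([2.0, 0.5, 0.01])   # 2 N normal, 0.5 N shear, 1% strain
print(decouple(M @ stimulus))           # recovers the original stimulus
```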

Enchanted by artificial intelligence

With the assistance of artificial intelligence, they developed a tactile system for simultaneous modulus and curvature measurements of an object through touch. Demonstrations include rapid modulus measurements of fruits, bread, and cake with various shapes and degrees of freshness.
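
For intuition, modulus-through-touch can also be done with a classical contact model: for a spherical tip of radius R pressed to depth δ, Hertz theory gives F = (4/3)·E*·√R·δ^(3/2), so fitting measured force against δ^(3/2) yields the effective modulus E*. This textbook approach is only a stand-in for the paper's AI-assisted estimator:

```python
import numpy as np

def effective_modulus(force, depth, R):
    """Fit Hertz contact, F = (4/3) * E_star * sqrt(R) * depth**1.5,
    to force-indentation samples and return E_star (Pa).

    force: array of forces (N); depth: array of depths (m); R: tip radius (m).
    """
    x = depth ** 1.5
    slope = np.dot(x, force) / np.dot(x, x)  # one-parameter least squares
    return 0.75 * slope / np.sqrt(R)

# Synthetic check: a 1 MPa material probed by a 5 mm radius tip
R, E_star = 5e-3, 1e6
depth = np.linspace(1e-4, 1e-3, 20)
force = (4 / 3) * E_star * np.sqrt(R) * depth ** 1.5
print(effective_modulus(force, depth, R))  # ~1e6
```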

3D architected electronic skin mimicking human mechanosensation
Credit: Tsinghua University

The resulting technology can rapidly measure the friction coefficient and modulus of objects with diverse shapes, with potential applications in freshness assessment, biomedical diagnosis, humanoid robotics, and prosthetic systems, among others.

Zhang’s study was done with colleagues from Tsinghua University’s Applied Mechanics Laboratory, Department of Engineering Mechanics and Laboratory of Flexible Electronics Technology.

More information:
Zhi Liu et al, A three-dimensionally architected electronic skin mimicking human mechanosensation, Science (2024). DOI: 10.1126/science.adk5556

Citation:
New electronic skin mimics human touch with 3D architecture (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-electronic-skin-mimics-human-3d.html


Using AI to decode dog vocalizations

An AI tool developed at the University of Michigan can tell playful barks from aggressive ones—as well as identifying the dog’s age, sex and breed. Credit: Marcin Szczepanski/Michigan Engineering.

Have you ever wished you could understand what your dog is trying to say to you? University of Michigan researchers are exploring the possibilities of AI, developing tools that can identify whether a dog’s bark conveys playfulness or aggression.

The same models can also glean other information from animal vocalizations, such as the animal's age, breed and sex. The study, a collaboration with Mexico's National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, finds that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.

The results were presented at the Joint International Conference on Computational Linguistics, Language Resources and Evaluation. The study is published on the arXiv preprint server.

“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, and director of U-M’s AI Laboratory.

“There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch.”

One of the prevailing obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are numerous resources and opportunities for recording human speech, collecting such data from animals is more difficult.

“Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author and U-M doctoral student in computer science and engineering. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”

Artem Abzaliev and his dog, Nova, in Nuremberg, Germany. The AI software he developed with Rada Mihalcea and Humberto Pérez-Espinosa can identify whether a dog’s bark is playful or aggressive as well as identifying breed, sex and age. Credit: Abzaliev

Because of this dearth of usable data, techniques for analyzing dog vocalizations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. The researchers overcame these challenges by repurposing an existing model that was originally designed to analyze human speech.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation. These models are trained to distinguish nuances in human speech, like tone, pitch and accent, and convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.

“These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”

The researchers used a dataset of dog vocalizations recorded from 74 dogs of varying breed, age and sex, in a variety of contexts. Humberto Pérez-Espinosa, a collaborator at INAOE, led the team who collected the dataset. Abzaliev then used the recordings to modify a machine-learning model—a type of computer algorithm that identifies patterns in large data sets. The team chose a speech representation model called Wav2Vec2, which was originally trained on human speech data.

With this model, the researchers were able to generate representations of the acoustic data collected from the dogs and interpret these representations. They found that Wav2Vec2 not only succeeded at four classification tasks but also outperformed other models trained specifically on dog bark data, achieving accuracy of up to 70%.
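
A minimal sketch of this transfer-learning recipe using the Hugging Face transformers library; the checkpoint and label count are placeholders rather than the authors' exact configuration:

```python
import torch
from transformers import (Wav2Vec2FeatureExtractor,
                          Wav2Vec2ForSequenceClassification)

ckpt = "facebook/wav2vec2-base"  # generic human-speech checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    ckpt, num_labels=2)  # e.g., playful vs. aggressive; the paper has more tasks

def classify_bark(waveform_16k):
    """Classify one 16 kHz mono waveform (1-D float array)."""
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()
```

In practice, the freshly added classification head would first be fine-tuned on labeled bark recordings; out of the box, its predictions are random.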

“This is the first time that techniques optimized for human speech have been built upon to help with the decoding of animal communication,” Mihalcea said. “Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations.”

In addition to establishing human speech models as a useful tool in analyzing animal communication—which could benefit biologists, animal behaviorists and more—this research has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to the emotional and physical needs of dogs, thereby enhancing their care and preventing potentially dangerous situations, the researchers said.

More information:
Artem Abzaliev et al, Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification, arXiv (2024). DOI: 10.48550/arxiv.2404.18739

Journal information:
arXiv


Citation:
Using AI to decode dog vocalizations (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-ai-decode-dog-vocalizations.html


Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems

The new protocol structure ensures global and semi-global finite time consensus for both leaderless and leader-following multi-agent systems and allows the calculation of an upper-bound for settling time for the closed-loop system. Credit: Chinese Association of Automation

Consensus problems, in which a group of agents such as unmanned vehicles, machines, or robots must agree on certain variables using only local communication among themselves, have attracted considerable attention as a fundamental issue in the cooperative control of multi-agent systems. Simply put, a multi-agent system comprises multiple decision-making agents that interact in a common environment to achieve common or conflicting goals depending on the situation.

Depending on whether agents track a predetermined leader, these problems can be classified as leaderless or leader-following consensus. Researchers have extensively studied both types of problems and developed consensus protocols. However, most current protocols only achieve asymptotic consensus.

Some applications require exact consensus within a limited time, known as finite-time consensus, which improves control accuracy and stability. In practice, however, finite-time consensus demands considerable control effort, and control effort is subject to physical limitations that, if neglected, can degrade controller performance.

Studies have explored finite-time control methods subject to such constraints, but most rely on homogeneity theory, under which convergence of consensus is difficult to ensure and an exact settling time is hard to estimate.

Addressing these issues, a team of researchers, including Professor Zongyu Zuo, Mr. Jingchuan Tan, and Mr. Ruiqi Ke, all from Beihang University, China, and IEEE Fellow Professor Qing-Long Han from Swinburne University of Technology, Australia, developed a novel protocol structure for achieving global and semi-global finite-time consensus for both leaderless and leader-following multi-agent systems. Their study was published in the IEEE/CAA Journal of Automatica Sinica.

The team was motivated by a fascination with the potential of robotic systems and artificial intelligence to transform our daily lives and tackle complex societal challenges efficiently and sustainably. Prof. Zuo intuitively explains their work, “Imagine a group of dancers who need to perform a synchronized routine, without directly seeing each other, only following cues from those nearby. Our work is akin to creating a set of rules that helps these dancers synchronize perfectly in a short time, ensuring everyone performs beautifully together even if they have limitations in how quickly they can move.”

The protocols presented in the study use a hyperbolic tangent function instead of the non-smooth saturation function used in traditional protocols. They guarantee global and semi-global finite-time consensus for single-integrator and double-integrator systems, respectively. Moreover, they allow explicit calculation of an upper bound on the settling time and a user-prescribed bound on the control level for closed-loop systems, making them highly practical and valuable for real-world applications.

Additionally, unlike traditional protocols, the hyperbolic tangent function avoids the need to determine input saturation for each agent, simplifying the design and stability analysis of the protocols. The researchers demonstrated the effectiveness of the new protocol structure through illustrative examples for single- and double-integrator multi-agent systems and by applying it to a practical system with multiple direct current motors.
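
As a toy illustration of the bounded-input idea, the simulation below runs single-integrator agents on a ring graph under a tanh-saturated protocol; the graph, gains, and initial states are invented, and the paper's actual protocols add further terms to guarantee finite-time (not merely asymptotic) convergence with a computable settling-time bound:

```python
import numpy as np

# Invented 4-agent undirected ring graph (adjacency matrix)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

k, dt = 2.0, 0.01                       # control gain and Euler step size
x = np.array([3.0, -1.0, 0.5, 2.0])     # initial agent states

for _ in range(2000):
    # u_i = -k * tanh(sum_j a_ij * (x_i - x_j)); |u_i| <= k by
    # construction, so no separate per-agent saturation logic is needed.
    disagreement = ((x[:, None] - x[None, :]) * A).sum(axis=1)
    x = x + dt * (-k * np.tanh(disagreement))

print(x)  # all states approach a common consensus value
```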

Highlighting the practical applications of this study, Prof. Zuo says, “These protocols have broad applications, such as autonomous drone fleets for agricultural or surveillance tasks, coordinated control of robotic arms, and synchronized traffic light systems.

“Ultimately, our research could improve the efficiency and reliability of autonomous systems. For example, better traffic management systems could reduce congestion and pollution, while more coordinated disaster response robots could save lives during crises.”

Overall, the innovative protocol structure marks a significant achievement in the field of consensus problems, leading to enhanced multi-agent autonomous systems.

More information:
Zongyu Zuo et al, Hyperbolic Tangent Function-Based Protocols for Global/Semi-Global Finite-Time Consensus of Multi-Agent Systems, IEEE/CAA Journal of Automatica Sinica (2024). DOI: 10.1109/JAS.2024.124485

Provided by
Chinese Association of Automation

Citation:
Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-protocol-finite-consensus-multi-agent.html
