
People have no difficulty getting to grips with an extra thumb, study finds

Image showing the Third Thumb design from the perspective of the wearer. Credit: Dani Clode Design & The Plasticity Lab

Cambridge researchers have shown that members of the public have little trouble in learning very quickly how to use a third thumb—a controllable, prosthetic extra thumb—to pick up and manipulate objects.

The team tested the robotic device on a diverse range of participants, which they say is essential for ensuring new technologies are inclusive and can work for everyone. The results are published in Science Robotics.

An emerging area of future technology is motor augmentation—using motorized wearable devices such as exoskeletons or extra robotic body parts to advance our motor capabilities beyond current biological limitations.

While such devices could improve the quality of life for healthy individuals who want to enhance their productivity, the same technologies can also provide people with disabilities new ways to interact with their environment.






Video showing one of our younger participants wearing the child-sized Third Thumb and performing the ‘Individuation’ task with pegs. Credit: Dani Clode Design & The Plasticity Lab

Professor Tamar Makin from the Medical Research Council (MRC) Cognition and Brain Sciences Unit at the University of Cambridge said, “Technology is changing our very definition of what it means to be human, with machines increasingly becoming a part of our everyday lives, and even our minds and bodies.

“These technologies open up exciting new opportunities that can benefit society, but it’s vital that we consider how they can help all people equally, especially marginalized communities who are often excluded from innovation research and development.

“To ensure everyone will have the opportunity to participate and benefit from these exciting advances, we need to explicitly integrate and measure inclusivity during the earliest possible stages of the research and development process.”

Dani Clode, a collaborator within Professor Makin’s lab, has developed the Third Thumb, an extra robotic thumb aimed at increasing the wearer’s range of movement, enhancing their grasping capability and expanding the carrying capacity of the hand. This allows the user to perform tasks that might be otherwise challenging or impossible to complete with one hand or to perform complex multi-handed tasks without having to coordinate with other people.






Video showing a participant wearing the Third Thumb and performing the ‘Individuation’ task with pegs. Credit: Dani Clode Design & The Plasticity Lab

The Third Thumb is worn on the opposite side of the palm from the biological thumb and is controlled by pressure sensors placed under each big toe or foot. Pressure from the right toe pulls the Thumb across the hand, while pressure from the left toe pulls it up toward the fingers. The extent of the Thumb’s movement is proportional to the pressure applied, and releasing the pressure returns it to its original position.
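As a rough illustration of this control scheme, here is a minimal sketch that maps toe-pressure readings to normalized thumb-joint positions. The sensor ranges and function names are hypothetical; the article does not describe the device’s actual firmware.

```python
# Minimal sketch of the proportional pressure-to-position mapping described
# above. Sensor ranges and names are assumptions for illustration, not the
# Third Thumb's actual control code.

def pressure_to_position(pressure: float, max_pressure: float = 1.0) -> float:
    """Map a toe-pressure reading to a normalized joint position in [0, 1].

    Movement is proportional to the applied pressure; zero pressure
    returns the joint to its rest position.
    """
    return max(0.0, min(pressure / max_pressure, 1.0))

def update_thumb(right_toe: float, left_toe: float) -> tuple[float, float]:
    """Right toe pulls the Thumb across the palm; left toe pulls it up
    toward the fingers. Returns the two normalized joint positions."""
    adduction = pressure_to_position(right_toe)  # across the hand
    flexion = pressure_to_position(left_toe)     # up toward the fingers
    return adduction, flexion

# Example: half pressure on the right toe, none on the left.
print(update_thumb(0.5, 0.0))  # -> (0.5, 0.0)
```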

In 2022, the team had the opportunity to test the Third Thumb at the annual Royal Society Summer Science Exhibition, where members of the public of all ages were able to use the device during different tasks.

Over the course of five days, the team tested 596 participants, ranging in age from three to 96 years old and from a wide range of demographic backgrounds. Of these, only four were unable to use the Third Thumb, either because it did not fit their hand securely, or because they were unable to control it with their feet (the pressure sensors developed specifically for the exhibition were not suitable for very lightweight children).

Participants were given up to a minute to familiarize themselves with the device, during which time the team explained how to perform one of two tasks.

The Third Thumb worn by different users. Credit: Dani Clode Design / The Plasticity Lab

The first task involved picking up pegs from a pegboard one at a time with just the Third Thumb and placing them in a basket. Participants were asked to move as many pegs as possible in 60 seconds. A total of 333 participants completed this task.

The second task involved using the Third Thumb together with the wearer’s biological hand to manipulate and move five or six different foam objects. The objects came in various shapes, each requiring a different manipulation, which increased the dexterity demands of the task. Again, participants were asked to move as many objects as they could into the basket within a maximum of 60 seconds. In all, 246 participants completed this task.

Almost everyone was able to use the device straight away: 98% of participants successfully manipulated objects using the Third Thumb during the first minute of use, with only 13 unable to perform the task.

Ability levels varied between participants, but there were no differences in performance between genders, nor did handedness affect performance, despite the Thumb always being worn on the right hand. There was no definitive evidence that people who might be considered ‘good with their hands’, for example because they were learning to play a musical instrument or their jobs involved manual dexterity, were any better at the tasks.

The Third Thumb helping the user to open a bottle. Credit: Dani Clode Design / The Plasticity Lab

Older and younger adults had a similar level of ability when using the new technology, though further investigation within the older-adult age bracket revealed a decline in performance with increasing age. The researchers say this effect could be due to the general degradation in sensorimotor and cognitive abilities associated with aging, and may also reflect a generational relationship to technology.

Performance was generally poorer among younger children. Six of the 13 participants who could not complete the task were under 10 years old, and among those who did complete it, the youngest children tended to perform worse than older children. Even older children (aged 12-16) struggled more than young adults.

Dani said, “Augmentation is about designing a new relationship with technology—creating something that extends beyond being merely a tool to becoming an extension of the body itself. Given the diversity of bodies, it’s crucial that the design stage of wearable technology is as inclusive as possible. It’s equally important that these devices are accessible and functional for a wide range of users. Additionally, they should be easy for people to learn and use quickly.”

Co-author Lucy Dowdall, also from the MRC Cognition and Brain Sciences Unit, added, “If motor augmentation—and even broader human-machine interactions—are to be successful, they’ll need to integrate seamlessly with the user’s motor and cognitive abilities. We’ll need to factor in different ages, genders, weights, lifestyles, and disabilities, as well as people’s cultural and financial backgrounds, and even their likes or dislikes of technology. Physical testing of large and diverse groups of individuals is essential to achieve this goal.”

There are countless examples where a lack of inclusive design considerations has led to technological failure:

  • Automated speech recognition systems that convert spoken language to text have been shown to perform better for white voices than for Black voices.
  • Some augmented reality technologies have been found to be less effective for users with darker skin tones.
  • Women face a higher health risk from car accidents, due to car seats and seatbelts being primarily designed to accommodate ‘average’ male-sized dummies during crash testing.
  • Hazardous power and industrial tools designed for right-handed use have resulted in more accidents when operated by left-handers forced to use their non-dominant hand.

More information:
Dani Clode et al, Evaluating Initial Usability of a Hand Augmentation Device Across a Large and Diverse Sample, Science Robotics (2024). DOI: 10.1126/scirobotics.adk5183. www.science.org/doi/10.1126/scirobotics.adk5183

Citation:
Third Thumb: People have no difficulty getting to grips with an extra thumb, study finds (2024, May 29)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-people-difficulty-extra-thumb.html


A peek inside art objects: New algorithm makes CT scan more accessible

An example of the scans that can be made using Bossema’s method. Left: Python Killing a Gnu, Antoine-Louis Barye (J. Paul Getty Museum, 85.SE.48). Middle: X-ray of Python Killing a Gnu, with ball bearings visible as black dots. Right: Cross-section through the CT reconstruction, showing the structure of the object and various materials used. Credit: Leiden University

An X-ray scanner, some small metal balls, and a newly developed algorithm. That is all you need to make a 3D model that enables you to look inside art objects without dismantling them. Thanks to the research of Francien Bossema (Centrum Wiskunde & Informatica and Leiden Institute of Advanced Computer Science), museums can now use existing X-ray equipment as CT scanners, without having to buy such a costly and complicated device.

What is on the inside of an art object? To answer that question, art experts can use an X-ray machine. Some museums own one for inspecting their objects. They use the machine to see whether an object has woodworm, for example, and to what extent. But such X-rays have drawbacks.

An X-ray shows everything superimposed, with no depth, so you can never really see a cross-section of the object. A CT scanner can do that, but not many museums can afford one. Bossema and her supervisor Joost Batenburg wondered: can we make better use of what we already have?

X-ray machine becomes aspiring CT scanner

A CT scanner is essentially an X-ray scanner that captures the object from all angles: you take hundreds or thousands of X-rays in a row. A reconstruction algorithm then turns those images into a 3D model of the object, which you can digitally slice in any direction.

In a professional CT scanner, like those in hospitals, the exact position of every component is known and handled automatically. Bossema has now developed an algorithm that recovers that geometry after the scan has been made. Thus, a simple X-ray scanner becomes an aspiring CT scanner.
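To see the principle, here is a deliberately simplified back-projection sketch in Python: each one-dimensional X-ray profile is smeared back across the image plane at the angle at which it was taken, and the smears are summed. Real CT software filters the projections and models the exact scanner geometry; this toy is not Bossema’s software, and the angles it needs as input are precisely what her method recovers after the fact.

```python
# Toy unfiltered back-projection for a parallel-beam geometry. An
# illustration of the reconstruction principle only.
import numpy as np
from scipy.ndimage import rotate

def backproject(projections: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """projections: (n_angles, n_detector_pixels) sinogram -> 2D cross-section."""
    n = projections.shape[1]
    recon = np.zeros((n, n))
    for profile, angle in zip(projections, angles_deg):
        smear = np.tile(profile, (n, 1))  # smear the 1D profile along its rays
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)
```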

Metal balls as reference points

We’ve heard about the X-ray scanner and the algorithm, but what about those small metal balls? Bossema said, “To make a CT scan, you need to be able to move the X-ray machine around the object. When you do that, you have to know exactly where everything was during the scan. Where is the source in relation to the turntable? By how many degrees have we rotated between two X-rays? Where is the detector located? All these positions need to be known very precisely. That’s why we put small metal balls next to the object.”

These balls have a very high density and show up as solid black dots on the X-ray image. “We look for black dots on those X-rays, which naturally move when you turn the object. With these reference points, you can calculate how much the object has been rotated. If you know that for all the photos, you can construct a 3D image of the object,” she added.
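As an illustration of that idea, the sketch below finds the dark marker dots in each radiograph, then recovers the rotation between exposures by fitting the sinusoidal path a marker on a turntable traces out. The threshold, the single-marker tracking, and the constant rotation step are simplifications for illustration, not the actual pipeline.

```python
# Sketch: locate high-density marker blobs, then fit their horizontal
# trajectory to recover the per-frame rotation step. Illustrative only.
import numpy as np
from scipy import ndimage
from scipy.optimize import curve_fit

def marker_centroids(radiograph: np.ndarray, threshold: float):
    """Return (row, col) centroids of the dark, high-density marker blobs."""
    mask = radiograph < threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def rotation_step_degrees(xs: np.ndarray) -> float:
    """xs: horizontal position of one marker in each successive X-ray.
    A marker on a turntable projects to x(i) = c + r*cos(phi + i*step);
    fitting that sinusoid recovers the rotation step between exposures."""
    i = np.arange(len(xs), dtype=float)
    model = lambda i, c, r, phi, step: c + r * np.cos(phi + i * step)
    p0 = (xs.mean(), np.ptp(xs) / 2, 0.0, 2 * np.pi / len(xs))
    popt, _ = curve_fit(model, i, xs, p0=p0)
    return np.degrees(popt[3])
```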

Building bridges between the sciences and the art world

Bossema tested the algorithm at four different locations, including three museums. At the Rijksmuseum in Amsterdam and the British Museum in London, she did the measurements herself. At the J. Paul Getty Museum in Los Angeles, she provided instructions only via e-mail and Zoom. From this, Bossema concludes that the method is broadly applicable.

She said, “If you know the Python programming language, you can basically use my software. But for art experts, it might be a bridge too far.” A more accessible user interface could help, but that is beyond the scope of Bossema’s research. She hopes someone will have the time and space to take the project further.

Building bridges between science and art research really appeals to Bossema. “My research also has a very practical application. I have not only written my own articles about the algorithm and the technique behind it, but I have also co-written articles with colleagues, because my technique has been used in other researchers’ projects at the museum. I really like that my research in turn facilitates the work of my colleagues,” she explained.

For now, Bossema is not ready to leave the museum world. This summer, she will spend ten weeks working with CT scans at the Getty in Los Angeles, and she is also a postdoctoral fellow at the Rijksmuseum in Amsterdam.

Besides mathematics, Bossema studied science communication. This helped her a lot during her Ph.D. research, she says. “This project involves a lot of communication, because I work with people at the museum who have a very different background from mine. They often don’t know what an algorithm is, or what a CT scanner can do for their work. I find it both fun and important to understand what they need. Not everyone in mathematics finds this communication aspect interesting, so that does make me unique as a researcher.”

Provided by
Leiden University


Citation:
A peek inside art objects: New algorithm makes CT scan more accessible (2024, June 11)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-peek-art-algorithm-ct-scan.html


Innovative bird eye–inspired camera developed for enhanced object detection

Figure 1. Structures and functions of bird’s eye. (a) Bird vision. (b) Deep central fovea and four types of cones. (c) Foveated vision and tetrachromatic vision. Credit: Science Robotics (2024). DOI: 10.1126/scirobotics.adk6903

The eyes of raptors can accurately perceive prey from kilometers away. Is it possible to model camera technology after birds’ eyes? Researchers have developed a new type of camera that is inspired by the structures and functions of birds’ eyes. A research team led by Prof. Kim Dae-Hyeong at the Center for Nanoparticle Research within the Institute for Basic Science (IBS), in collaboration with Prof. Song Young Min at the Gwangju Institute of Science and Technology (GIST), has developed a perovskite-based camera specializing in object detection.

The work is published in the journal Science Robotics.

The eyes of different organisms in the natural world have evolved and been optimized to suit their habitat. As a result of countless years of evolutionary adaptation and flying at high altitudes, bird eyes have developed unique structures and visual functions.

In the retina of an animal’s eye, there is a small pit called the fovea that refracts the light entering the eye. Unlike the shallow foveae found in human eyes, bird eyes have deep central foveae, which refract the incoming light to a large extent. The region of the highest cone density lies within the foveae (Figure 1b), allowing the birds to clearly perceive distant objects through magnification (Figure 1c). This specialized vision is known as foveated vision.

While human eyes can only see visible light, bird eyes have four cones that respond to ultraviolet (UV) as well as visible (red, green, blue; RGB) light. This tetrachromatic vision enables birds to acquire abundant visual information and effectively detect target objects in a dynamic environment (Figure 1c).

Figure 2. Bird-eye-inspired camera. (a) Schematic view of bird-eye-inspired camera. (b) Artificial fovea. (c) Schematic of a multispectral image sensor. (d) Multispectral image sensor. Credit: Science Robotics (2024). DOI: 10.1126/scirobotics.adk6903

Inspired by these capabilities, the IBS research team designed a new type of camera that specializes in object detection, incorporating an artificial fovea and a multispectral image sensor that responds to both UV and RGB light (Figure 2a).

First, the researchers fabricated the artificial fovea by mimicking the deep central foveae in birds’ eyes (Figure 2b) and optimized the design through optical simulation. This allows the camera to magnify distant target objects without image distortion.
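As a rough illustration of foveated imaging, the sketch below remaps a grayscale image so its center is sampled more densely, and therefore magnified, while the periphery stays in view. The power-law profile is an arbitrary illustrative choice, not the optics of the team’s artificial fovea.

```python
# Sketch of foveated magnification via coordinate remapping. Illustrative
# only; the real device achieves this optically with its deep fovea.
import numpy as np
from scipy.ndimage import map_coordinates

def foveate(image: np.ndarray, gamma: float = 1.8) -> np.ndarray:
    """Remap a grayscale image so the central 'fovea' appears magnified."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dy, dx = y - cy, x - cx
    r = np.hypot(dy, dx)
    r_max = r.max()
    # gamma > 1 compresses source radii near the center, so the middle of
    # the output samples a smaller source region: central magnification.
    r_src = r_max * (r / r_max) ** gamma
    ratio = np.divide(r_src, r, out=np.ones_like(r), where=r > 0)
    return map_coordinates(image, [cy + dy * ratio, cx + dx * ratio], order=1)
```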

The team then used perovskite, a material known for its excellent electrical and optical properties, to fabricate the multispectral image sensor. Four types of photodetectors were fabricated using different perovskite materials that absorb different wavelengths. The multispectral image sensor was finally fabricated by vertically stacking the four photodetectors (Figure 2c and 2d).

The first co-author Dr. Park Jinhong states, “We also developed a new transfer process to vertically stack the photodetectors. By using the perovskite patterning method developed in our previous research, we were able to fabricate the multispectral image sensor that can detect UV and RGB without additional color filters.”
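Why no color filters are needed can be shown with a toy model: each layer in the vertical stack absorbs a different band, so the four layer readouts are a linear mixture of the UV, blue, green, and red intensities, and a calibrated absorption matrix lets the bands be recovered per pixel. The matrix values below are invented for illustration, not measured device responses.

```python
# Toy spectral unmixing for a vertically stacked four-layer sensor.
import numpy as np

# Rows: stacked detector layers (top to bottom); cols: UV, B, G, R response.
# Values are made up for illustration.
A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])

def unmix(layer_signals: np.ndarray) -> np.ndarray:
    """layer_signals: (4, H, W) stack readout -> (4, H, W) UV/B/G/R image."""
    flat = layer_signals.reshape(4, -1)
    return np.linalg.solve(A, flat).reshape(layer_signals.shape)
```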

Figure 3. Performance of the bird-eye-inspired camera. (a) Setup for measurement. (b) Bird-eye-inspired camera perceives both the distant object (star) through magnification in the foveal region and nearby objects (triangle, square, circle) in the peripheral region. (c, d) The multispectral image sensor can distinguish UV and RGB light without color filters and capture colored images. Credit: Science Robotics (2024). DOI: 10.1126/scirobotics.adk6903

Conventional cameras that use a zoom lens to magnify objects have the disadvantage of focusing only on the target object and not its surroundings. On the other hand, the bird-eye-inspired camera provides both a magnified view of the foveal region along with the surrounding view of the peripheral region (Figure 3a and 3b).

By comparing the two fields of vision, the bird-eye-inspired camera can achieve greater motion detection capabilities than the conventional camera (Figure 3c and 3d). In addition, the camera is more cost-effective and lightweight as it can distinguish UV and RGB light without additional color filters.
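A toy version of that comparison is sketched below: frame-difference motion energy is computed separately for a central “foveal” region and the surrounding periphery. The radius split and the grayscale-frame assumption are illustrative choices, not the paper’s detection pipeline.

```python
# Sketch: region-wise motion energy from simple frame differencing.
import numpy as np

def motion_energy(prev: np.ndarray, curr: np.ndarray, fovea_frac: float = 0.3):
    """Return (foveal, peripheral) mean frame-difference energy."""
    h, w = curr.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2)
    fovea = r < fovea_frac * min(h, w) / 2
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return diff[fovea].mean(), diff[~fovea].mean()
```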

The research team verified the object recognition and motion detection capabilities of the developed camera through simulations. In terms of object recognition, the new camera demonstrated a confidence score of 0.76, which is about twice as high as the existing camera system’s confidence score of 0.39. The motion detection rate also increased by 3.6 times compared to the existing camera system, indicating significantly enhanced sensitivity to motion.

“Birds’ eyes have evolved to quickly and accurately detect distant objects while in flight. Our camera can be used in areas that need to detect objects clearly, such as robots and autonomous vehicles. In particular, the camera has great potential for application to drones operating in environments similar to those in which birds live,” remarked Prof. Kim.

This innovative camera technology represents a significant advancement in object detection, offering numerous potential applications across various industries.

More information:
Jinhong Park et al, Avian eye–inspired perovskite artificial vision system for foveated and multispectral imaging, Science Robotics (2024). DOI: 10.1126/scirobotics.adk6903

Citation:
Innovative bird eye–inspired camera developed for enhanced object detection (2024, May 30)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-bird-eyeinspired-camera.html


A virtual reality pegboard test shows performance does not always match user preference

VR pegboard image and study participant. Credit: Laurent Voisard et al

Virtual hand interactions are one of the most common and useful applications that virtual reality (VR) systems offer users. But, as a new Concordia-led study shows, personal preference remains an important factor in how the technology is applied, regardless of the effect on overall performance.

In a paper presented at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in October 2023, the researchers shared their findings from experiments involving participants performing repetitive tasks on a VR-based Purdue Pegboard Test (PPT).

One of the many applications of the PPT is as a therapeutic tool for patients who have suffered neurological damage, such as a stroke. It is designed to improve gross and fine motor skills.

The participants were equipped with a VR headset. They were then instructed to pick up a virtual object and place it in a hole as quickly and as accurately as possible. Variations involved using the dominant hand, the non-dominant hand, both hands, and performing assembly tasks.

The tasks were repeated across three separate modes. In the first, the user’s virtual hand was opaque, meaning they could not see through it. In the second, the outline of the user’s hand was visible but the hand itself was transparent. And in the third, the hand disappeared once the peg was picked up.

Metrics such as duration, downtime, movement time, path length, linear velocity, angle and angular velocity were recorded.
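For illustration, here is one way such kinematic metrics could be computed from a tracked trajectory of timestamps, positions, and orientations. The sampling format, the stationarity threshold for downtime, and the function names are assumptions, not the study’s actual analysis code.

```python
# Sketch: kinematic metrics from a tracked hand trajectory.
import numpy as np

def kinematic_metrics(t: np.ndarray, pos: np.ndarray, ang: np.ndarray) -> dict:
    """t: (n,) seconds; pos: (n, 3) meters; ang: (n,) radians about one axis."""
    dt = np.diff(t)
    step = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    speed = step / dt
    return {
        "duration": t[-1] - t[0],                    # total task time
        "path_length": step.sum(),                   # distance traveled
        "mean_linear_velocity": speed.mean(),
        "mean_angular_velocity": np.abs(np.diff(ang) / dt).mean(),
        "downtime": dt[speed < 0.01].sum(),          # time nearly stationary
    }
```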

Opaque hands were found to perform noticeably slower: users opened their fingers more narrowly and completed fewer tasks compared with the invisible-hand visualization.

“This is what we hypothesized, because the invisible hand visualization does not occlude the object the participant is holding,” says lead author Laurent Voisard. “The invisible hand gives users more control and lets them see where they are placing their peg better. It also increases motor dexterity when performing movements requiring fine hand movements. This case could be used to create more effective and efficient medical applications in VR.

“But the participants did not all necessarily prefer the invisible hand,” he adds. “In fact, 10 participants said they preferred the transparent hand while seven chose the opaque hand. Seven others selected the invisible hand.”

left to right: Transparent hand, opaque hand, invisible hand. Credit: Laurent Voisard et al

Participants who preferred the transparent hand emphasized that they felt the hands and the environment were easier to perceive at the same time. They also said it was easy to interact with the objects.

Participants who preferred the opaque hand said movements were easier to track and control. Conversely, participants who liked the invisible hand said they found it easier and more comfortable to accomplish the task and to understand when it was completed.

Personalizing home rehab

The researchers say they hope the study can serve as a basis for more research. Potential topics include how VR and PPT can be used therapeutically, and how they can be applied in technical fields such as surgery planning.

“Every individual is different, so they will have different preferences. That is why we recommend giving users the choice of how they visualize their VR experience,” says co-author Anil Ufuk Batmaz, an assistant professor in the Department of Computer Science and Software Engineering at the Gina Cody School of Engineering and Computer Science. Batmaz is also the director of the EXIT Lab.

“One visualization may have better results. But if it is not preferred by users, then they may not use the system at all.”

“The PPT is often used as a diagnostic tool by neurologists for people who have suffered brain injuries or strokes. However, it can also be used for rehabilitation,” notes co-author Marta Kersten-Oertel, an associate professor in the same department and the director of the Applied Perception Lab.

“Studies like ours show the best interaction methods for doing this type of rehabilitation at home in a virtual environment.”

Amal Hatira and Mine Sarac at Kadir Has University in Istanbul, Turkey, also contributed to this study.

More information:
Laurent Voisard et al, Effects of Opaque, Transparent and Invisible Hand Visualization Styles on Motor Dexterity in a Virtual Reality Based Purdue Pegboard Test, 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2023). DOI: 10.1109/ISMAR59233.2023.00087

Citation:
A virtual reality pegboard test shows performance does not always match user preference (2024, January 30)
retrieved 24 June 2024
from https://techxplore.com/news/2024-01-virtual-reality-pegboard-user.html


Study reveals strategies for effective Industry 4.0 implementation

Credit: Pixabay/CC0 Public Domain

Constructor University researchers Prof. Dr.-Ing. Hendro Wicaksono, Linda Angreani, and Annas Vijaya have published a study on Industry 4.0 technologies in the Journal of Manufacturing Technology Management.

Their research illustrates how companies can navigate the complexities of integrating advanced technologies, such as automation and the Internet of Things (IoT), into their manufacturing processes.

This research is unique as it is the first to explore the alignment between maturity models and reference architecture models, offering valuable insights for companies striving to enhance their Industry 4.0 adoption strategies.

The study does so by introducing a comprehensive maturity model to assess an industry’s readiness to adopt Industry 4.0, aligned with reference architecture models (RAMs) like RAMI4.0, NIST-SME, IMSA, IVRA, and IIRA, enabling better implementation strategies for companies.

“One of the significant findings is the identification of varied interpretations of Industry 4.0 maturity models within organizations. The research highlights the critical challenge of aligning these models with established RAMs, which is essential for a successful Industry 4.0 transformation,” write Angreani and Vijaya, both research associates working with Prof. Hendro Wicaksono at Constructor University.

“Additionally, the study reveals that both maturity models and reference architectures often overlook human and cultural aspects, which are vital for effective implementation.”

More information:
Linda Salma Angreani et al, Enhancing strategy for Industry 4.0 implementation through maturity models and standard reference architectures alignment, Journal of Manufacturing Technology Management (2024). DOI: 10.1108/JMTM-07-2022-0269

Citation:
Study reveals strategies for effective Industry 4.0 implementation (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-reveals-strategies-effective-industry.html
