
Behavioral and computational study shows that social preferences can be inferred from decision speed alone

The time it takes a person to decide can predict their preference
Considering the decision time of someone else can help us learn about their preference and make more accurate predictions about what they would choose in the future. Created using images from Pixabay (1, 2, 3, 4). Credit: Sophie Bavard and Pixabay (CC0, creativecommons.org/publicdomain/zero/1.0/)

Researchers led by Sophie Bavard at the University of Hamburg, Germany, found that people can infer hidden social preferences by observing how fast others make social decisions.

Published June 20 in the open-access journal PLOS Biology, the study shows that when observers know which options another person is weighing and how long that person takes to reach a decision, they can use this information to predict the person’s preferences, even if they do not know what the actual choices were.

How do we know what someone’s social preferences or beliefs are when they are so often hidden and unspoken? While past studies have focused on observing another’s choices, the new study takes a deeper look by examining both choices and decision time.

The researchers asked participants to play the Dictator Game in which a so-called dictator must choose between two options to determine how much they will give away or keep for themselves. After playing the part of the dictator, the participants were asked to observe other dictators and predict the preferred give/take proportions.

The amount of information provided to the participants varied; sometimes they knew the decisions, sometimes the decision time, sometimes both, and sometimes neither.

The researchers hypothesized that even without knowing the decisions, if they could see the options and know the decision speed, participants would be able to predict the preferences.

A computational modeling analysis showed that in theory, dictator behavior could be predicted from decision times alone using a reinforcement learning model. But do people naturally internalize this type of mathematical model when observing others?

The answer was yes: participants learned the dictators’ preferences when all they knew were the options and the decision times, although their predictions were most accurate when they also knew the actual decisions. This indicates that observers fall back on decision time when choices are not available, which expands our knowledge of decision making in social contexts.
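To make the modeling idea concrete, here is a minimal, hypothetical Python sketch of how an observer could update an estimate of a dictator’s hidden preference from decision times alone. It is not the authors’ actual reinforcement learning model; the utility function, the mapping from utility gap to decision time, and all parameter values are illustrative assumptions.

    # Illustrative sketch only -- not the model used in the paper.
    # Assumption: the dictator's utility weights own payoff against the other's
    # payoff with a hidden weight w, and choices between similarly valued
    # options take longer (a smaller utility gap means a slower decision).
    import numpy as np

    def utility(offer_self, offer_other, w):
        """Utility of an offer under selfishness weight w (0 = generous, 1 = selfish)."""
        return w * offer_self + (1 - w) * offer_other

    def predicted_rt(option_a, option_b, w, base=2.0, slope=1.5):
        """Hypothetical mapping: smaller utility gap -> longer decision time (seconds)."""
        gap = abs(utility(*option_a, w) - utility(*option_b, w))
        return base + slope / (gap + 0.1)

    def update_estimate(w_hat, option_a, option_b, observed_rt, lr=0.05):
        """Delta-rule update: nudge the estimated weight toward the value
        that best explains the observed decision time for these options."""
        grid = np.linspace(0, 1, 101)
        errors = [(predicted_rt(option_a, option_b, w) - observed_rt) ** 2 for w in grid]
        target = grid[int(np.argmin(errors))]
        return w_hat + lr * (target - w_hat)

    # One observed trial: options are (points kept, points given away).
    w_hat = 0.5
    w_hat = update_estimate(w_hat, option_a=(8, 2), option_b=(5, 5), observed_rt=3.2)
    print(round(w_hat, 3))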

The authors add, “Our findings challenge the conventional belief that choices alone are the only piece of information one can use to understand others’ social preferences.

“By incorporating response times into models of how people learn from each other, we can make more accurate predictions of human behavior, as response times provide a continuous measure that reveals the strength of these preferences, offering a more detailed perspective.”

More information:
Bavard S, Stuchlý E, Konovalov A, Gluth S (2024). Humans can infer social preferences from decision speed alone. PLOS Biology. DOI: 10.1371/journal.pbio.3002686

Citation:
Behavioral and computational study shows that social preferences can be inferred from decision speed alone (2024, June 20)
retrieved 25 June 2024
from https://phys.org/news/2024-06-behavioral-social-inferred-decision.html


Are your Cyber Monday purchases legit? There’s (going to be) an app for that

Schematic showing the proposed process for image PUF generation and printing. Credit: Micromachines (2023). DOI: 10.3390/mi14091678

Receiving a bogus designer handbag or imitation Wagyu beef might infuriate a Cyber Monday consumer, but a knock-off respirator or a fake pacemaker could imperil them.

Virginia Tech researcher Emma Meno is developing a mobile app to empower buyers to ensure their purchases are legitimate. In a study published in Micromachines earlier this fall, Meno and a team of researchers described their work to date.

“Counterfeiters put things on the market that look like authorized medical devices or they intercept a legitimate transaction,” said Meno, a research associate at the Virginia Tech National Security Institute. “A fake biodevice is a huge health risk, and the growing number of people who are affected by this is worrisome.”

In the days leading up to Cyber Monday last year, law enforcement agencies took down almost 13,000 websites peddling phony luxury goods or pirated content. While there are steep legal ramifications for counterfeiting, they’re only enforceable if someone gets caught.

“Unfortunately, it’s easy for counterfeiters to evade detection: They can simply delete the website and create a new one,” said Meno, who is also a Commonwealth Cyber Initiative researcher supported by the Innovation: Ideation to Commercialization program, which seeks to move research from the lab to the marketplace.

To counter the counterfeiters, Meno and her team are developing a first-of-its-kind verification tool tailored specifically for the end user. First, a seller stamps the item with a special label printed with biodegradable ink. When the purchase arrives, the buyer can use a mobile device to scan the stamp like they would a QR code and receive confirmation that the item received matches what was sent.

Initially, Meno’s team is focusing on lower-risk areas such as luxury goods and working up to applications in the biomedical industries—but sellers in all these areas stand to benefit from this type of end-user product.

“It’s not only about a consumer’s peace of mind,” Meno said. “This is also a concern for industries who lose money or suffer brand defamation every time someone buys a knockoff.”

The app development builds on a collaboration between Virginia Tech researchers and colleagues from Virginia Commonwealth University, who developed and tested the biodegradable ink that could be affixed to something like Wagyu beef, the highly sought-after meat costing upward of $200 per pound. The labels act as a non-cloneable “fingerprint,” and the team is currently working out how best to digitize them so that each label matches exactly one key in a database before being automatically deleted.

“If a user scans a certain label and it doesn’t show up in the database, then it’s likely a counterfeit,” Meno said. “Someone in the middle may have intercepted it.”
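The check described above can be pictured as a one-time key lookup. The following Python sketch is an assumption about the general workflow, not the team’s actual system: the label fingerprint, the hashing step, and the single-use deletion are illustrative stand-ins for however the real database is implemented.

    # Hypothetical sketch of a single-use label lookup -- not the actual app.
    import hashlib

    registered_labels = set()   # stands in for the seller-side database

    def register_label(label_fingerprint: bytes) -> str:
        """Seller side: store a hash of the digitized label when the item ships."""
        key = hashlib.sha256(label_fingerprint).hexdigest()
        registered_labels.add(key)
        return key

    def verify_scan(scanned_fingerprint: bytes) -> bool:
        """Buyer side: a scan is genuine only if its key is found, and the key
        is then deleted so the same label cannot be verified twice."""
        key = hashlib.sha256(scanned_fingerprint).hexdigest()
        if key in registered_labels:
            registered_labels.remove(key)   # single use, then gone
            return True
        return False                        # unknown label -> likely counterfeit

    register_label(b"digitized label pattern #1")
    print(verify_scan(b"digitized label pattern #1"))  # True: item matches what was sent
    print(verify_scan(b"digitized label pattern #1"))  # False: key already used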

The mobile app aims to be able to defend against this type of attack and others as part of a larger effort to secure online transactions and protect businesses in cyberspace.

More information:
Sayantan Pradhan et al, Deep-Learning-Based Digitization of Protein-Self-Assembly to Print Biodegradable Physically Unclonable Labels for Device Security, Micromachines (2023). DOI: 10.3390/mi14091678

Provided by
Virginia Tech


Citation:
Are your Cyber Monday purchases legit? There’s (going to be) an app for that (2023, November 24)
retrieved 25 June 2024
from https://techxplore.com/news/2023-11-cyber-monday-legit-app.html


Scientists identify safe havens we must preserve to prevent ‘the sixth great extinction of life on Earth’

Conservation imperatives are an achievable and affordable solution for biodiversity protection. Credit: Dinerstein et al/Frontiers

In a new article, a coalition of conservationists and researchers have shown how we can protect Earth’s remaining biodiversity by conserving just a tiny percentage of the planet’s surface. This affordable, achievable plan would make it possible for us to preserve the most threatened species from extinction, safeguarding Earth’s wildlife for the future.

“Most species on Earth are rare, meaning that species either have very narrow ranges or they occur at very low densities or both,” said Dr. Eric Dinerstein of the NGO Resolve, lead author of the article in Frontiers in Science. “And rarity is very concentrated. In our study, zooming in on this rarity, we found that we need only about 1.2% of the Earth’s surface to head off the sixth great extinction of life on Earth.”

Prioritizing the planet

To meet ambitious conservation goals, an additional 1.2 million square kilometers of land were protected between 2018 and 2023. But do these new conservation areas effectively protect critical biodiversity? Dinerstein and his colleagues estimated that only 0.11 million square kilometers of the 2018-2023 expansion covered habitat for range-limited and threatened species. Careful planning of protected areas is therefore crucial to ensure that efforts and resources are targeted as effectively as possible.

The scientists started by mapping the entire world, using six layers of global biodiversity data. By combining these layers of data with maps of existing protected areas and a fractional land cover analysis, using satellite images to identify the remaining habitat available to rare and threatened species, the scientists were able to identify the most critical, currently unprotected areas of biodiversity. They called these Conservation Imperatives: a global blueprint to help countries and regions plan conservation at a more local level.
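As a rough illustration of this kind of overlay analysis, the toy Python sketch below flags unprotected grid cells that any biodiversity layer marks as important and that still hold habitat. The six layers, the protected-area mask, the habitat threshold, and the tiny grid are all made up for the example; the study’s actual analysis used global datasets and a fractional land cover method.

    # Toy overlay sketch -- hypothetical layers, not the study's data.
    import numpy as np

    rng = np.random.default_rng(0)
    rarity_layers = rng.random((6, 4, 4)) > 0.5    # six biodiversity data layers
    already_protected = rng.random((4, 4)) > 0.7   # existing protected areas
    habitat_remaining = rng.random((4, 4)) > 0.4   # land cover above a habitat threshold

    # A cell is a candidate "Conservation Imperative" if any rarity layer flags it,
    # it is not already protected, and suitable habitat remains there.
    rare_anywhere = rarity_layers.any(axis=0)
    candidates = rare_anywhere & ~already_protected & habitat_remaining
    print(candidates.astype(int))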

Protecting 1.22% of land would help save the most threatened species from extinction. Credit: Dinerstein et al/Frontiers

These 16,825 sites, covering approximately 164 million hectares (Mha), could prevent all predicted extinctions if they were adequately protected. Just protecting the sites found in the tropics could stave off most predicted extinctions. Additionally, 38% of Conservation Imperatives are very close to already-protected areas, which could make it easier to absorb them into protected areas or to find other ways of conserving them.

“These sites are home to over 4,700 threatened species in some of the world’s most biodiverse yet threatened ecosystems,” said Andy Lee of Resolve, a co-author. “These include not only mammals and birds that rely on large intact habitats, like the tamaraw in the Philippines and the Celebes crested macaque in Sulawesi, Indonesia, but also range-restricted amphibians and rare plant species.”

The cost of conservation

To calculate the price of this protection, the scientists modeled a cost estimate using data from hundreds of land protection projects over 14 years, and accounting for the type and amount of land acquired as well as country-specific economic factors. These numbers are approximate because a variety of land purchase or long-term lease options, each with different costs, might work well for protecting Conservation Imperatives. Stakeholders worldwide, including indigenous peoples, communities with jurisdiction over Conservation Imperative sites, and other members of civil society, will need to decide which options work best for them.

Habitat for rare and endangered species is underrepresented in new protected areas. Credit: Dinerstein et al/Frontiers

“Our analysis estimated that protecting the Conservation Imperatives in the tropics would cost approximately $34 billion per year over the next five years,” said Lee. “This represents less than 0.2% of the United States’ GDP, less than 9% of the annual subsidies benefiting the global fossil fuel industry, and a fraction of the revenue generated from the mining and agroforestry industries each year.”
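As a quick arithmetic check of the GDP comparison only (the subsidy and industry figures are not re-derived here), assuming a US GDP of roughly $27 trillion:

    # Sanity check of the "less than 0.2% of US GDP" claim (GDP figure assumed).
    annual_cost = 34e9      # $34 billion per year
    us_gdp = 27e12          # roughly $27 trillion (assumption, ~2023 level)
    print(f"{annual_cost / us_gdp:.2%}")   # about 0.13%, indeed under 0.2%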

Preserving wildlife is also key to halting and reversing the climate crisis. Preserving biodiversity means protecting the Earth’s forest cover, which acts as a carbon sink: by conserving carbon-rich, wildlife-rich forested regions, we protect both threatened species and humans. While securing Conservation Imperatives is only part of the work—for example, just purchasing land won’t prevent poaching—it’s the first critical step we need to take.

“What will we bequeath to future generations? A healthy, vibrant Earth is critical for us to pass on,” said Dinerstein. “So we’ve got to get going. We’ve got to head off the extinction crisis. Conservation Imperatives drive us to do that.”

More information:
Conservation Imperatives: securing the last unprotected terrestrial sites harboring irreplaceable biodiversity, Frontiers in Science (2024). DOI: 10.3389/fsci.2024.1349350. www.frontiersin.org/journals/s … ci.2024.1349350/full

Citation:
Scientists identify safe havens we must preserve to prevent ‘the sixth great extinction of life on Earth’ (2024, June 25)
retrieved 25 June 2024
from https://phys.org/news/2024-06-scientists-safe-havens-sixth-great.html


Meet CARMEN, a robot that helps people with mild cognitive impairment

Ph.D. student Anya Bouzida, one of the paper’s first authors, demonstrates how CARMEN works. Credit: David Baillot/University of California San Diego

Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation—a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home.

Unlike other robots in this space, CARMEN was developed by the research team at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners. To the best of the researchers’ knowledge, CARMEN is also the only robot that teaches compensatory cognitive strategies to help improve memory and executive function.

“We wanted to make sure we were providing meaningful and practical interventions,” said Laurel Riek, a professor of computer science and emergency medicine at UC San Diego and the work’s senior author.

MCI is an intermediate stage between typical aging and dementia. It affects various areas of cognitive functioning, including memory, attention, and executive functioning. About 20% of individuals over 65 have the condition, with up to 15% transitioning to dementia each year. Existing pharmacological treatments have not been able to slow or prevent this progression, but behavioral treatments can help.

Credit: University of California – San Diego

Researchers programmed CARMEN to deliver a series of simple cognitive training exercises. For example, the robot can teach participants to create routine places to leave important objects, such as keys, or to use note-taking strategies to remember important things. CARMEN does this through interactive games and activities.

The research team designed CARMEN with a clear set of criteria in mind. It is important that people can use the robot independently, without clinician or researcher supervision. For this reason, CARMEN had to be plug-and-play, without many moving parts that require maintenance. The robot also had to be able to function with limited access to the internet, as many people do not have reliable connectivity.

CARMEN also needs to be able to function over a long period of time, communicate clearly with users, express compassion and empathy for a person’s situation, and provide breaks after challenging tasks to help sustain engagement.

Researchers deployed CARMEN for a week in the homes of several people with MCI, who then engaged in multiple tasks with the robot, such as identifying routine places to leave household items so they don’t get lost, and placing tasks on a calendar so they won’t be forgotten.

Researchers also deployed the robot in the homes of several clinicians with experience working with people with MCI. Both groups of participants completed questionnaires and interviews before and after the week-long deployments.

After the week with CARMEN, participants with MCI reported trying strategies and behaviors that they previously had written off as impossible. All participants reported that using the robot was easy. Two out of the three participants found the activities easy to understand, but one of the users struggled. All said they wanted more interaction with the robot.

“We found that CARMEN gave participants confidence to use cognitive strategies in their everyday life, and participants saw opportunities for CARMEN to exhibit greater levels of autonomy or be used for other applications,” the researchers write.

The research team presented their findings at the ACM/IEEE Human Robot Interaction (HRI) conference in March 2024, where they received a best paper award nomination.

Next steps include deploying the robot in a larger number of homes.

Researchers also plan to give CARMEN the ability to have conversations with users, with an emphasis on preserving privacy when these conversations happen. This is both an accessibility issue (some users might not have the fine motor skills needed to interact with CARMEN’s touch screen) and a matter of user expectation, since most people expect to be able to have conversations with systems in their homes.

At the same time, researchers want to limit how much information CARMEN can give users. “We want to be mindful that the user still needs to do the bulk of the work, so the robot can only assist and not give too many hints,” Riek said.

Researchers are also exploring how CARMEN could assist users with other conditions, such as ADHD.

The UC San Diego team built CARMEN based on the FLEXI robot from the University of Washington, but they made substantial changes to its hardware and wrote all of its software from scratch. Researchers used ROS, the open-source Robot Operating System framework, as the robot’s middleware.
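For readers unfamiliar with ROS, the hypothetical rospy node below sketches the kind of small component such a software stack might contain: it publishes activity prompts on a topic. The topic name, messages, and timing are illustrative assumptions, not taken from CARMEN’s codebase, and running it requires a ROS 1 installation.

    # Hypothetical ROS 1 node sketch -- not CARMEN's actual code.
    import rospy
    from std_msgs.msg import String

    def prompt_node():
        rospy.init_node("activity_prompter")
        pub = rospy.Publisher("/demo/prompt", String, queue_size=10)
        rate = rospy.Rate(0.2)  # one prompt every five seconds
        prompts = ["Pick a routine spot for your keys.",
                   "Add today's appointment to the calendar."]
        i = 0
        while not rospy.is_shutdown():
            pub.publish(String(data=prompts[i % len(prompts)]))
            i += 1
            rate.sleep()

    if __name__ == "__main__":
        try:
            prompt_node()
        except rospy.ROSInterruptException:
            pass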

Many elements of the project are available on GitHub.

More information:
Anya Bouzida et al, CARMEN: A Cognitively Assistive Robot for Personalized Neurorehabilitation at Home, Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (2024). DOI: 10.1145/3610977.3634971

Citation:
Meet CARMEN, a robot that helps people with mild cognitive impairment (2024, June 24)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-carmen-robot-people-mild-cognitive.html


Immersive engagement in mixed reality can be measured with reaction time

The Reality-Virtuality Continuum. Virtual objects are colored green, and real-world objects are colored blue. Credit: arXiv: DOI: 10.48550/arxiv.2309.11662

In the real world/digital world cross-over of mixed reality, a user’s immersive engagement with the program is called presence. Now, UMass Amherst researchers are the first to identify reaction time as a potential presence measurement tool. Their findings, published in IEEE Transactions on Visualization and Computer Graphics, have implications for calibrating mixed reality to the user in real time.

“In virtual reality, the user is in the virtual world; they have no connection with their physical world around them,” explains Fatima Anwar, assistant professor of electrical and computer engineering, and an author on the paper.

“Mixed reality is a combination of both: You can see your physical world, but then on top of that, you have that spatially related information that is virtual.” She gives attaching a virtual keyboard onto a physical table as an example. This is similar to augmented reality but takes it a step further by making the digital elements more interactive with the user and the environment.

The uses for mixed reality are most obvious within gaming, but Anwar says that it’s rapidly expanding into other fields: academics, industry, construction and health care.

However, mixed reality experiences vary in quality: “Does the user feel that they are present in that environment? How immersive do they feel? And how does that impact their interactions with the environment?” asks Anwar. This is what is defined as “presence.”

Up to now, presence has been measured with subjective questionnaires after a user exits a mixed-reality program. Unfortunately, when presence is measured after the fact, it’s hard to capture a user’s feelings of the entire experience, especially during long exposure scenes. (Also, people are not very articulate in describing their feelings, making them an unreliable data source.) The ultimate goal is to have an instantaneous measure of presence so that the mixed reality simulation can be adjusted in the moment for optimal presence. “Oh, their presence is going down. Let’s do an intervention,” says Anwar.

Yasra Chandio, doctoral candidate in computer engineering and lead study author, gives medical procedures as an example of the importance of this real-time presence calibration: If a surgeon needs millimeter-level precision, they may use mixed reality as a guide to tell them exactly where they need to operate.

“If we just show the organ in front of them, and we don’t adjust for the height of the surgeon, for instance, that could be delaying the surgeon and could have inaccuracies for them,” she says. Low presence can also contribute to cybersickness, a feeling of dizziness or nausea that can occur in the body when a user’s bodily perception does not align with what they’re seeing. However, if the mixed reality system is internally monitoring presence, it can make adjustments in real-time, like moving the virtual organ rendering closer to eye level.

One marker within mixed reality that can be measured continuously and passively is reaction time, or how quickly a user interacts with the virtual elements. Through a series of experiments, the researchers determined that reaction time is associated with presence: slower reaction times indicate lower presence and faster reaction times indicate higher presence, with 80% predictive accuracy even with a small dataset.

To test this, the researchers put participants in modified “Fruit Ninja” mixed reality scenarios (without the scoring), adjusting how authentic the digital elements appeared to manipulate presence.

Presence is a combination of two factors: place illusion and plausibility illusion. “First of all, virtual objects should look real,” says Anwar. That’s place illusion. “The objects should look the way physical things look, and the second thing is: are they behaving in a real way? Do they follow the laws of physics while they’re behaving in the real world?” This is plausibility illusion.

In one experiment, they adjusted place illusion: the fruit appeared either as lifelike fruit or as abstract cartoons. In another experiment, they adjusted plausibility illusion by showing mugs filling up with coffee either in the correct upright position or sideways.

Realistic, abstract, plausible, and implausible virtual objects used in the experiments. Credit: arXiv: DOI: 10.48550/arxiv.2309.11662

What they found: people reacted more quickly to the lifelike fruit than to the cartoonish-looking food, and the same pattern held for the plausible versus implausible behavior of the coffee mug.
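As an illustration of how such a quicker-versus-slower relationship could be turned into a predictive measure (this is not the study’s analysis, and the data below are synthetic), the sketch fits a simple classifier that predicts high versus low presence from reaction time:

    # Toy example: classify high vs. low presence from reaction time on made-up data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    rt_high = rng.normal(0.45, 0.08, 100)   # reaction times (s) in high-presence trials
    rt_low = rng.normal(0.65, 0.10, 100)    # reaction times (s) in low-presence trials
    X = np.concatenate([rt_high, rt_low]).reshape(-1, 1)
    y = np.concatenate([np.ones(100), np.zeros(100)])   # 1 = high presence

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")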

Reaction time is a good measure of presence because it highlights whether the virtual elements are a tool or a distraction. “If a person is not feeling present, they would be looking into that environment and figuring out things,” explains Chandio. “Their cognition and perception are focused on something other than the task at hand, because they are figuring out what is going on.”

“Some people are going to argue, ‘Why would you not create the best scene in the first place?’ but that’s because humans are very complex,” Chandio explains. “What works for me may not work for you may not work for Fatima, because we have different bodies, our hands move differently, we think of the world differently. We perceive differently.”

More information:
Yasra Chandio et al, Investigating the Correlation Between Presence and Reaction Time in Mixed Reality, IEEE Transactions on Visualization and Computer Graphics (2023). DOI: 10.1109/TVCG.2023.3319563. On arXiv: DOI: 10.48550/arxiv.2309.11662

Journal information:
arXiv


Citation:
Study: Immersive engagement in mixed reality can be measured with reaction time (2023, November 27)
retrieved 25 June 2024
from https://techxplore.com/news/2023-11-immersive-engagement-reality-reaction.html
