
Scientists propose AI method that integrates habitual and goal-directed behaviors

The Bayesian behavior framework. Credit: Nature Communications (2024). DOI: 10.1038/s41467-024-48577-7

Both living creatures and AI-driven machines need to act quickly and adaptively in response to situations. In psychology and neuroscience, behavior is categorized into two types: habitual (fast and simple but inflexible) and goal-directed (flexible but complex and slower).

Daniel Kahneman, who won the Nobel Prize in Economic Sciences, referred to these as System 1 and System 2. However, there is ongoing debate as to whether they are independent, conflicting entities or mutually supportive components.

Scientists from the Okinawa Institute of Science and Technology (OIST) and Microsoft Research Asia in Shanghai have proposed a new AI method in which systems of habitual and goal-directed behaviors learn to help each other.

In computer simulations that mimicked the exploration of a maze, the method quickly adapted to changing environments and also reproduced the behavior of humans and animals after they had become accustomed to a certain environment over a long period.

The study, published in Nature Communications, not only paves the way for the development of systems that adapt quickly and reliably in the burgeoning field of AI, but also offers neuroscience and psychology clues about how we make decisions.

Building on the theory of “active inference,” which has recently attracted much attention, the scientists derived a model that integrates habitual and goal-directed systems for learning behavior in AI agents performing reinforcement learning, a method of learning based on rewards and punishments.

In the paper, they created a computer simulation mimicking a task in which mice explore a maze based on visual cues and are rewarded with food when they reach the goal.

They examined how the two systems adapt and integrate while interacting with the environment, showing that together they achieve adaptive behavior quickly. The AI agent collected its own data and improved its behavior through reinforcement learning.

What our brains prefer

After a long day at work, you usually head home on autopilot (habitual behavior). However, if you have just moved house and are not paying attention, you might find yourself driving back to your old place out of habit.

When you catch yourself doing this, you switch gears (goal-directed behavior) and reroute to your new home. Traditionally, these two behaviors are considered to work independently, resulting in behavior being either habitual and fast but inflexible, or goal-directed and flexible but slow.

“The automatic transition from goal-directed to habitual behavior during learning is a very famous finding in psychology. Our model and simulations can explain why this happens: The brain would prefer behavior with higher certainty. As learning progresses, habitual behavior becomes less random, thereby increasing certainty. Therefore, the brain prefers to rely on habitual behavior after significant training,” Dr. Dongqi Han, a former Ph.D. student at OIST’s Cognitive Neurorobotics Research Unit and first author of the paper, explained.
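The paper formalizes this switch with variational Bayes; as a loose illustration only (not the authors' model), the certainty-based arbitration Dr. Han describes can be sketched as choosing habit once the habitual policy's entropy drops below a threshold. Every name, value, and threshold below is invented for the example.

```python
# Minimal sketch (not the paper's model) of certainty-based arbitration:
# act out of habit once the habitual policy has become sufficiently
# certain (low entropy), otherwise fall back on goal-directed planning.
import numpy as np

def policy_entropy(action_probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a discrete action distribution."""
    p = action_probs[action_probs > 0]
    return float(-np.sum(p * np.log(p)))

def select_action(habitual_probs: np.ndarray, planned_action: int,
                  entropy_threshold: float = 0.5) -> int:
    """Hypothetical arbiter: habit when certain, planning when not."""
    if policy_entropy(habitual_probs) < entropy_threshold:
        return int(np.argmax(habitual_probs))  # fast, habitual response
    return planned_action                      # slower, goal-directed response

# Early in learning the habitual policy is nearly uniform (high entropy),
# so the planner decides; after training it is peaked, so habit takes over.
early = np.array([0.26, 0.25, 0.25, 0.24])
late = np.array([0.94, 0.02, 0.02, 0.02])
print(select_action(early, planned_action=2))  # -> 2 (planner wins)
print(select_action(late, planned_action=2))   # -> 0 (habit wins)
```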

For a new goal that the AI has not been trained on, it uses an internal model of the environment to plan its actions. Rather than considering every possible action, it uses a combination of its habitual behaviors, which makes planning more efficient.

This challenges traditional AI approaches, which require every possible goal to be explicitly included in training before it can be achieved. In this model, a new goal can be reached without explicit training, by flexibly combining learned knowledge.
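As a rough sketch of that idea, goal-directed planning can treat the habitual policy as a proposal distribution, sampling a handful of candidate action sequences rather than searching all of them. The world model, policy, and goal below are toy stand-ins, not the authors' implementation.

```python
# Hypothetical sketch: plan toward a novel goal by sampling candidate
# action sequences from the habitual policy and rolling them out in a
# learned world model, keeping the sequence that ends nearest the goal.
import numpy as np

rng = np.random.default_rng(0)

def plan_with_habits(state, goal, habitual_probs, world_model,
                     n_candidates=16, horizon=5):
    n_actions = len(habitual_probs)
    best_seq, best_dist = None, np.inf
    for _ in range(n_candidates):
        seq = rng.choice(n_actions, size=horizon, p=habitual_probs)
        s = np.asarray(state)
        for a in seq:
            s = world_model(s, a)          # predicted next state
        dist = np.linalg.norm(s - np.asarray(goal))
        if dist < best_dist:
            best_seq, best_dist = seq, dist
    return best_seq

# Toy stand-in for the internal model: 2-D positions, grid moves.
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
def toy_model(s, a):
    return s + moves[a]

habit = np.array([0.4, 0.1, 0.4, 0.1])    # habit favors "up" and "right"
print(plan_with_habits([0, 0], [3, 2], habit, toy_model))
```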

“It’s important to achieve a kind of balance or trade-off between flexible and habitual behavior,” Prof. Jun Tani, head of the Cognitive Neurorobotics Research Unit, stated. “There could be many possible ways to achieve a goal, but considering all possible actions is very costly, so goal-directed behavior is limited by habitual behavior to narrow down the options.”

Building better AI

Dr. Han became interested in neuroscience, and in the gap between artificial and human intelligence, when he started working on AI algorithms. “I started thinking about how AI can behave more efficiently and adaptably, like humans. I wanted to understand the underlying mathematical principles and how we can use them to improve AI. That was the motivation for my Ph.D. research.”

Understanding the difference between habitual and goal-directed behaviors has important implications, especially in the field of neuroscience, because it can shed light on neurological disorders such as ADHD, OCD, and Parkinson’s disease.

“We are exploring the computational principles by which multiple systems in the brain work together. We have also seen that neuromodulators such as dopamine and serotonin play a crucial role in this process,” Prof. Kenji Doya, head of the Neural Computation Unit, explained.

“AI systems developed with inspiration from the brain and proven capable of solving practical problems can serve as valuable tools in understanding what is happening in the brains of humans and animals.”

Dr. Han would like to help build better AI systems that can adapt their behavior to achieve complex goals.

“We are very interested in developing AI that has near-human abilities when performing everyday tasks, so we want to address this human-AI gap. Our brains have two learning mechanisms, and we need to better understand how they work together to achieve our goal.”

More information:
Dongqi Han et al, Synergizing habits and goals with variational Bayes, Nature Communications (2024). DOI: 10.1038/s41467-024-48577-7

Citation:
Simplicity versus adaptability: Scientists propose AI method that integrates habitual and goal-directed behaviors (2024, June 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-simplicity-scientists-ai-method-habitual.html


Online toxicity can only be countered by humans and machines working together, say researchers

Credit: Unsplash/CC0 Public Domain

Wading through the staggering amount of social media content being produced every second to find the nastiest bits is no task for humans alone.

Even with the newest deep-learning tools at their disposal, the employees who identify and review problematic posts can be overwhelmed and often traumatized by what they encounter every day. Gig-working annotators who analyze and label data to help improve machine learning can be paid pennies per unit worked.

In a Concordia-led paper published in IEEE Technology and Society Magazine, researchers argue that supporting these human workers is essential and requires a constant re-evaluation of the techniques and tools they use to identify toxic content.

The authors examine social, policy, and technical approaches to automatic toxicity detection and consider their shortcomings while also proposing potential solutions.

“We want to know how well current moderating techniques, which involve both machine learning and human annotators of toxic language, are working,” says Ketra Schmitt, one of the paper’s co-authors and an associate professor with the Centre for Engineering in Society at the Gina Cody School of Engineering and Computer Science.

She believes that human contributions will remain essential to moderation. While existing automated toxicity detection methods can and will improve, none is without error. Human decision-makers are essential to review decisions.

“Moderation efforts would be futile without machine learning because the volume is so enormous. But lost in the hype around artificial intelligence (AI) is the basic fact that machine learning requires a human annotator to work. We cannot remove either humans or the AI.”
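To make that division of labor concrete, a simplified pipeline might let the model act only on clear-cut cases and route everything uncertain to human reviewers. The function and thresholds below are illustrative assumptions, not the system described in the paper.

```python
# Illustrative triage, not an actual moderation system: the model handles
# confident cases and uncertain posts go to a human review queue.
def route_post(toxicity_score: float,
               auto_remove_at: float = 0.95,
               auto_keep_below: float = 0.10) -> str:
    """Triage a post given a model's toxicity score in [0, 1]."""
    if toxicity_score >= auto_remove_at:
        return "auto-remove"      # model is confident the post is toxic
    if toxicity_score < auto_keep_below:
        return "auto-keep"        # model is confident the post is fine
    return "human-review"         # uncertain: a person decides

for score in (0.99, 0.03, 0.55):
    print(score, "->", route_post(score))
```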







Arezo Bodaghi is a research assistant at the Concordia Institute for Information Systems Engineering and the paper’s lead author. “We cannot simply rely on the current evaluation metrics found in machine and deep learning to identify toxic content,” Bodaghi adds. “We need them to be more accurate and multilingual as well.

“We also need them to be very fast, but they can lose accuracy when machine learning techniques are fast. There is a trade-off to be made.”
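One standard way to manage that trade-off (offered here as a general illustration, not something taken from the paper) is a cascade: a fast, cheap model screens every post, and only borderline cases are escalated to a slower, more accurate model. Both models below are dummy stand-ins.

```python
# Dummy two-stage cascade illustrating the speed/accuracy trade-off.
def fast_model(text: str) -> float:       # e.g., a keyword or linear model
    return 0.9 if "insult" in text.lower() else 0.2

def slow_model(text: str) -> float:       # e.g., a large transformer
    return 0.8 if "insult" in text.lower() else 0.05

def cascade_score(text: str, low: float = 0.15, high: float = 0.85) -> float:
    """Escalate to the slow model only when the fast model is unsure."""
    score = fast_model(text)
    if low <= score <= high:               # borderline: spend more compute
        score = slow_model(text)
    return score

print(cascade_score("have a nice day"))    # fast model unsure -> slow model: 0.05
print(cascade_score("you utter insult"))   # fast model confident -> 0.9
```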

Broader input from diverse groups will help machine-learning tools become as inclusive and bias-free as possible. This includes recruiting workers who are non-English speakers and come from underrepresented groups such as LGBTQ2S+ and racialized communities. Their contributions can help improve the large language models and data sets used by machine-learning tools.

Keeping the online world social

The researchers offer several concrete recommendations that companies can adopt to improve toxicity detection.

First and foremost is improving working conditions for annotators. Many companies pay them by the unit of work rather than by the hour. Furthermore, these tasks can easily be offshored to workers who accept lower wages than their North American or European counterparts, so companies can wind up paying their employees less than a dollar an hour.

And little in the way of mental health treatment is offered even though these employees are front-line bulwarks against some of the most horrifying online content.

Companies can also deliberately build online platform cultures that prioritize kindness, care, and mutual respect, in contrast to platforms such as Gab, 4chan, 8chan, and Truth Social, which celebrate toxicity.

Improving algorithmic approaches would also help large language models make fewer misidentification errors and better differentiate context and language.

Finally, corporate culture at the platform level has an impact at the user level.

When ownership deprioritizes or even eliminates user trust and safety teams, for instance, the effects can be felt company-wide and risk damaging morale and user experience.

“Recent events in the industry show why it is so important to have human workers who are respected, supported, paid decently, and have some safety to make their own judgments,” Schmitt concludes.

More information:
Arezo Bodaghi et al, Technological Solutions to Online Toxicity: Potential and Pitfalls, IEEE Technology and Society Magazine (2024). DOI: 10.1109/MTS.2023.3340235

Citation:
Online toxicity can only be countered by humans and machines working together, say researchers (2024, February 28)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-online-toxicity-countered-humans-machines.html


AmEx buys dining reservation company Tock from Squarespace for $400M

An American Express logo is attached to a door in Boston’s Seaport District, July 21, 2021. American Express announced Friday, June 21, 2024, it will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash. Credit: AP Photo/Steven Senne, File

American Express will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash.

AmEx began making acquisitions in the dining and event space with its purchase of Resy five years ago, giving cardmembers access to hard-to-get reservations at restaurants and other venues. Other credit card issuers have done the same: JPMorgan acquired the lifestyle brand The Infatuation in 2021.

Tock, which launched in Chicago in 2014 and has been owned by Squarespace since 2021, provides reservation and table management services to roughly 7,000 restaurants and other venues. Restaurants signed up with Tock include Aquavit, the high-end Nordic restaurant in New York, as well as the buzzy new restaurant Chez Noir in California.

Squarespace and Tock confirmed the deal Friday.

AmEx’s purchase of Resy five years ago raised a lot of eyebrows in both the credit card and dining industries, but it has become a key part of how the company locks in high-end businesses as either AmEx-exclusive merchants or ones that give preferential treatment to AmEx cardmembers. The number of restaurants on the Resy platform has grown fivefold since AmEx purchased the company.

AmEx also announced Friday it would buy Rooam, a contactless payment platform that is used heavily in stadiums and other entertainment venues. AmEx did not disclose how much it was paying for Rooam.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
AmEx buys dining reservation company Tock from Squarespace for $400M (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-amex-buys-dining-reservation-company.html


Tunisian all-women’s team eye inventors’ prize for smart wheelchair

Tunisian engineers Khaoula Ben Ahmed and Souleima Ben Temime work on their team’s new wheelchair system.

A smart wheelchair system built by a team of young Tunisian women engineers has reached the finals for a prestigious European inventors’ prize, setting a hopeful precedent in a country embroiled in multiple crises.

Their project, Moovobrain, allows wheelchair users to move through a choice of touchpad, voice command, facial gestures or, most impressively, a headset that detects their brain signals.
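Public descriptions do not detail how Moovobrain merges these input channels, so the following is purely a speculative sketch of one way a multi-modal controller could arbitrate between them; every name and threshold is invented.

```python
# Speculative sketch of multi-modal command arbitration (not Gewinner's
# actual design): each input source reports an action plus a confidence,
# and the most confident reading wins, defaulting to "stop" for safety.
from dataclasses import dataclass

@dataclass
class Command:
    source: str        # "touchpad", "voice", "gesture", or "headset"
    action: str        # e.g., "forward", "left", "right", "stop"
    confidence: float  # 0.0 to 1.0

def pick_command(readings, min_confidence=0.6):
    usable = [r for r in readings if r.confidence >= min_confidence]
    if not usable:
        return "stop"  # no trustworthy signal: stay put
    return max(usable, key=lambda r: r.confidence).action

readings = [Command("voice", "forward", 0.70),
            Command("headset", "left", 0.85),
            Command("gesture", "right", 0.40)]
print(pick_command(readings))  # -> "left"
```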

It has been shortlisted from over 550 applicants for the final round of the Young Inventors Prize, launched by the European Patent Office in 2021.

This year marks “the first time a Tunisian and Arab team has reached the final” stage of the international competition, the EPO said in a statement.

The all-female team will compete against two other finalists, from the Netherlands and Ukraine, for the top prize on July 9 in Malta.

The inspiration for the Moovobrain app first came from co-founder Souleima Ben Temime, 28, whose uncle was “forced to use a wheelchair to move” after his upper body was paralyzed.

“There was a clear and urgent need in front of me,” she told AFP.

“I talked about it to my friends and we decided to use the digital health technologies … to make a product that could benefit a lot of people.”

Success against the odds

The four inventors met at the Higher Institute of Medical Sciences in Tunis, where they began developing the Moovobrain prototype in 2017, before creating health-tech start-up Gewinner two years later.

The team’s international success comes despite Tunisia’s growing economic and political turmoil, which in recent years has pushed thousands of Tunisians to seek a better life in Europe through perilous journeys overseas.

President Kais Saied, elected in October 2019, has launched a sweeping power grab since he sacked parliament in July 2021.

Tunisian engineer Souleima Ben Temime tests a prototype of her team’s new wheelchair system.

The political crisis has been compounded by a biting economic meltdown — but that has not dampened the young women’s spirits.

Rather, co-founder Khaoula Ben Ahmed, 28, is hopeful that reaching the finals in the Young Inventors competition will bring the team “visibility and credibility”.

“It’s not always easy to convince investors or wheelchair manufacturers that our solution is truly innovative and useful for people with reduced mobility,” she said.

For wheelchair users who “cannot speak” and “no longer have any autonomy,” even “asking to be turned towards the television” can “become very trying on a psychological level,” Ben Ahmed added.

Alongside Ben Ahmed and Ben Temime, the other team members are Sirine Ayari, 28, and Ghofrane Ayari, 27, who are not related.

‘Favorable ecosystem’

The Young Inventors Prize — which rewards “exceptional inventors under the age of 30” — awards a first prize of 20,000 euros ($21,600), a second of 10,000 euros and a third of 5,000 euros.

The team says being women was “an advantage” because they were able to take part in competitions for female engineers and receive specialized funding.

More than 44 percent of engineers in Tunisia are women, according to the United Nations, and Ben Ahmed says the country has “a favorable ecosystem” for start-ups despite its challenges.

Their start-up Gewinner will soon deliver the first four wheelchairs equipped with the new technology to an organization for disabled people in Sousse, eastern Tunisia, and the team hopes user feedback will help improve the product.

Internationally, Gewinner is focusing on Europe in the short term and has already established a partnership with an Italian manufacturer.

The inventors say that even though each smart chair costs around 2,000 euros, they hope to ensure the technology is accessible to as many people as possible, including those in less well-off countries.

“In Tunisia, we have prepared 30 units, not with the idea that it will be the end user who will pay, but organizations supporting associations which will be able to sponsor the purchase of chairs or adaptation of our technology,” said Ben Ahmed.

© 2024 AFP

Citation:
Tunisian all-women’s team eye inventors’ prize for smart wheelchair (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-tunisian-women-team-eye-inventors.html


People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show

Pass rates (left) and interrogator confidence (right) for each witness type. Pass rates are the proportion of the time a witness type was judged to be human. Error bars represent 95% bootstrap confidence intervals. Significance stars above each bar indicate whether the pass rate was significantly different from 50%. Comparisons show significant differences in pass rates between witness types. Right: Confidence in human and AI judgements for each witness type. Each point represents a single game. Points further toward the left and right indicate higher confidence in AI and human verdicts respectively. Credit: Jones and Bergen.

Large language models (LLMs), such as the GPT-4 model underpinning the widely used conversational platform ChatGPT, have surprised users with their ability to understand written prompts and generate suitable responses in various languages. Some of us may thus wonder: are the texts and answers generated by these models so realistic that they could be mistaken for those written by humans?

Researchers at UC San Diego recently set out to answer this question by running a Turing test, a well-known method named after computer scientist Alan Turing that is designed to assess the extent to which a machine demonstrates human-like intelligence.

The findings of this test, outlined in a paper pre-published on the arXiv server, suggest that people find it difficult to distinguish between the GPT-4 model and a human agent when interacting with them in a two-person conversation.

“The idea for this paper actually stemmed from a class that Ben [Bergen] was running on LLMs,” Cameron Jones, co-author of the paper, told Tech Xplore.

“In the first week we read some classic papers about the Turing test and we discussed whether an LLM could pass it and whether or not it would matter if it could. As far as I could tell, nobody had tried at that point, so I decided to build an experiment to test this as my class project, and we then went on to run the first public exploratory experiment.”

The first study, carried out by Jones and supervised by Benjamin Bergen, a professor of cognitive science at UC San Diego, yielded some interesting results, suggesting that GPT-4 could pass as human in approximately 50% of interactions. Nonetheless, that exploratory experiment did not control well for some variables that could influence the findings, so the researchers ran a second experiment, which yielded the results presented in their recent paper.

“As we went through the process of running the studies we discovered that other people were also doing great work in this area, including Jannai et al’s ‘human or not’ game,” Jones said. “We created a 2-player online version of the game where human participants would either be matched up with another human or an AI model.”

During each trial of the two-player game used by Jones and his colleagues, a human interrogator interacts with a “witness,” who could be either human or an AI agent. The interrogator asks the witness a series of questions to try to determine whether it is human or not.

A selection of conversations between human interrogators (green) and witnesses (grey). One of these four conversations is with a human witness, the rest are with AI. Interrogator verdicts and ground truth identities are below (to allow readers to indirectly participate). [A) Verdict: Human (100% confidence) Took a while to Google Addis Ababa. Ground Truth: GPT-4; B) Verdict: AI (100% confidence) Long time for responses, wouldn’t tell me a specific place they grew up. Ground Truth: Human; C) Verdict: Human (100% confidence) He seems very down to earth and speaks naturally. Ground Truth: GPT-3.5; D) Verdict: AI (67% confidence), Did not put forth any effort to convince me they were human and the responses were odd, Ground Truth: ELIZA.] Credit: Jones and Bergen.

“The conversations lasted for up to five minutes, and at the end the interrogator made a judgment about whether they thought the witness was a human or an AI,” Jones explained. “Within the five minutes, participants could talk about anything they wanted (except saying abusive things, which we used a filter to prevent).”

In this two-player computer game, the researchers deployed three AI systems as potential witnesses: GPT-4, GPT-3.5, and ELIZA, a classic rule-based chatbot. They found that while users could often identify ELIZA and GPT-3.5 as machines, their ability to tell whether GPT-4 was a human or a machine was no better than chance (i.e., random guessing).
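For readers curious about the statistics behind “no better than chance”: the pass rate is simply the fraction of games in which a witness was judged human, and the error bars in the figure above are 95% bootstrap confidence intervals. The snippet below recomputes such an interval on made-up verdict data (the real data are in the paper).

```python
# Pass rate with a 95% percentile-bootstrap confidence interval,
# computed on fabricated verdicts purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
verdicts = (rng.random(500) < 0.54).astype(float)  # 1.0 = "judged human"

def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    means = [rng.choice(data, size=len(data), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_ci(verdicts)
print(f"pass rate = {verdicts.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# If the interval contains 0.5, the witness is indistinguishable from chance.
print("different from chance:", not (lo <= 0.5 <= hi))
```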

“Although real humans were actually more successful, persuading interrogators that they were human two-thirds of the time, our results suggest that in the real world, people might not be able to reliably tell if they’re speaking to a human or an AI system,” Jones said.

“In fact, in the real world, people might be less aware of the possibility that they’re speaking to an AI system, so the rate of deception might be even higher. I think this could have implications for the kinds of things that AI systems will be used for, whether automating client-facing jobs, or being used for fraud or misinformation.”

The results of the Turing test run by Jones and Bergen suggest that LLMs, particularly GPT-4, are now hard to distinguish from humans during brief chat conversations. These observations suggest that people might soon become increasingly distrustful of others they interact with online, as they become less certain whether those others are humans or bots.

The researchers are now planning to update and re-open the public Turing test they designed for this study, to test some additional hypotheses. Their future work could yield further insight into how well people can distinguish between humans and LLMs.

“We’re interested in running a three-person version of the game, where the interrogator speaks to a human and an AI system simultaneously and has to figure out who is who,” Jones added.

“We’re also interested in testing other kinds of AI setups, for example giving agents access to live news and weather, or a ‘scratchpad’ where they can take notes before they respond. Finally, we’re interested in testing whether AI’s persuasive capabilities extend to other areas, like convincing people to believe lies, vote for specific policies, or donate money to a cause.”

More information:
Cameron R. Jones et al, People cannot distinguish GPT-4 from a human in a Turing test, arXiv (2024). DOI: 10.48550/arxiv.2405.08007

Journal information:
arXiv


© 2024 Science X Network

Citation:
People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show (2024, June 16)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-people-struggle-humans-chatgpt-minute.html
