
Online toxicity can only be countered by humans and machines working together, say researchers

Credit: Unsplash/CC0 Public Domain

Wading through the staggering amount of social media content being produced every second to find the nastiest bits is no task for humans alone.

Even with the newest deep-learning tools at their disposal, the employees who identify and review problematic posts can be overwhelmed and often traumatized by what they encounter every day. Gig-working annotators who analyze and label data to help improve machine learning can be paid pennies per unit worked.

In a Concordia-led paper published in IEEE Technology and Society Magazine, researchers argue that supporting these human workers is essential and requires a constant re-evaluation of the techniques and tools they use to identify toxic content.

The authors examine social, policy, and technical approaches to automatic toxicity detection and consider their shortcomings while also proposing potential solutions.

“We want to know how well current moderating techniques, which involve both machine learning and human annotators of toxic language, are working,” says Ketra Schmitt, one of the paper’s co-authors and an associate professor with the Centre for Engineering in Society at the Gina Cody School of Engineering and Computer Science.

She believes that human contributions will remain essential to moderation. While existing automated toxicity detection methods can and will improve, none is without error. Human decision-makers are essential to review decisions.

“Moderation efforts would be futile without machine learning because the volume is so enormous. But lost in the hype around artificial intelligence (AI) is the basic fact that machine learning requires a human annotator to work. We cannot remove either humans or the AI.”
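To picture the division of labor Schmitt describes, with the machine handling volume and humans reviewing the uncertain cases, consider the minimal triage sketch below. The model checkpoint (the open-source unitary/toxic-bert on the Hugging Face Hub) and both thresholds are illustrative assumptions, not the researchers' actual pipeline.

```python
# A hedged sketch of human-in-the-loop moderation triage. The checkpoint
# and thresholds are illustrative assumptions, not the paper's system.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

AUTO_ACTION = 0.95   # hypothetical: act automatically only when very confident
REVIEW_BAND = 0.50   # hypothetical: uncertain scores go to a human annotator

def triage(post: str) -> str:
    """Return 'remove', 'human_review', or 'publish' for a single post."""
    scores = toxicity(post, top_k=None)      # scores for every toxicity label
    worst = max(s["score"] for s in scores)  # strongest toxicity signal
    if worst >= AUTO_ACTION:
        return "remove"        # the machine acts alone only on clear-cut cases
    if worst >= REVIEW_BAND:
        return "human_review"  # ambiguous content is routed to people
    return "publish"

print(triage("Have a wonderful day!"))
```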







Arezo Bodaghi is a research assistant at the Concordia Institute for Information Systems Engineering and the paper’s lead author. “We cannot simply rely on the current evaluation metrics found in machine and deep learning to identify toxic content,” Bodaghi adds. “We need them to be more accurate and multilingual as well.

“We also need them to be very fast, but machine-learning techniques tend to lose accuracy when they are made faster. There is a trade-off to be made.”

Broader input from diverse groups will help machine-learning tools become as inclusive and bias-free as possible. This includes recruiting workers who are non-English speakers and come from underrepresented groups such as LGBTQ2S+ and racialized communities. Their contributions can help improve the large language models and data sets used by machine-learning tools.

Keeping the online world social

The researchers offer several concrete steps companies can take to improve toxicity detection.

First and foremost is improving working conditions for annotators. Many companies pay them by the unit of work rather than by the hour. Furthermore, these tasks can easily be offshored to regions where wages are far lower than in North America or Europe, so companies can wind up paying workers less than a dollar an hour.

And little in the way of mental health treatment is offered even though these employees are front-line bulwarks against some of the most horrifying online content.

Companies can also deliberately build online platform cultures that prioritize kindness, care, and mutual respect, in contrast to platforms such as Gab, 4chan, 8chan, and Truth Social, which celebrate toxicity.

Improved algorithmic approaches would also help large language models make fewer misidentification errors and better distinguish context and language.

Finally, corporate culture at the platform level has an impact at the user level.

When ownership deprioritizes or even eliminates user trust and safety teams, for instance, the effects can be felt company-wide and risk damaging morale and user experience.

“Recent events in the industry show why it is so important to have human workers who are respected, supported, paid decently, and have some safety to make their own judgments,” Schmitt concludes.

More information:
Arezo Bodaghi et al, Technological Solutions to Online Toxicity: Potential and Pitfalls, IEEE Technology and Society Magazine (2024). DOI: 10.1109/MTS.2023.3340235

Citation:
Online toxicity can only be countered by humans and machines working together, say researchers (2024, February 28)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-online-toxicity-countered-humans-machines.html


AmEx buys dining reservation company Tock from Squarespace for $400M

An American Express logo is attached to a door in Boston’s Seaport District, July 21, 2021. American Express announced Friday, June 21, 2024, it will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash. Credit: AP Photo/Steven Senne, File

American Express will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash.

AmEx began making acquisitions in the dining and event space with its purchase of Resy five years ago, giving cardmembers access to hard-to-book restaurants and venues. Other credit card issuers have done the same: JPMorgan acquired the lifestyle brand The Infatuation in 2021.

Tock, which launched in Chicago in 2014 and has been owned by Squarespace since 2021, provides reservation and table management services to roughly 7,000 restaurants and other venues. Restaurants signed up with Tock include Aquavit, the high-end Nordic restaurant in New York, as well as the buzzy new restaurant Chez Noir in California.

Squarespace and Tock confirmed the deal Friday.

AmEx’s purchase of Resy five years ago raised a lot of eyebrows in both the credit card and dining industries, but it has become a key part of how the company locks in high-end businesses to be either AmEx-exclusive merchants or ones that give preferential treatment to AmEx cardmembers. The number of restaurants on the Resy platform has grown fivefold since AmEx purchased the company.

AmEx also announced Friday it would buy Rooam, a contactless payment platform that is used heavily in stadiums and other entertainment venues. AmEx did not disclose how much it was paying for Rooam.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
AmEx buys dining reservation company Tock from Squarespace for $400M (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-amex-buys-dining-reservation-company.html


Tunisian all-women’s team eye inventors’ prize for smart wheelchair

Tunisian engineers Khaoula Ben Ahmed and Souleima Ben Temime work on their team’s new wheelchair system.

A smart wheelchair system built by a team of young Tunisian women engineers has reached the finals for a prestigious European inventors’ prize, setting a hopeful precedent in a country embroiled in multiple crises.

Their project, Moovobrain, allows wheelchair users to move through a choice of touchpad, voice command, facial gestures or, most impressively, a headset that detects their brain signals.
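As a rough illustration of such a multi-input design, the sketch below reduces several input channels to one shared set of motion commands. Every name and mapping here is hypothetical; nothing is taken from the Moovobrain codebase.

```python
# Hypothetical command layer for a multi-input smart wheelchair: voice and
# brain-signal channels both reduce to the same small set of motion commands.
from enum import Enum

class Motion(Enum):
    FORWARD = "forward"
    BACKWARD = "backward"
    LEFT = "left"
    RIGHT = "right"
    STOP = "stop"

# Voice channel: map recognized words onto the shared command set.
VOICE_COMMANDS = {"go": Motion.FORWARD, "back": Motion.BACKWARD,
                  "left": Motion.LEFT, "right": Motion.RIGHT,
                  "stop": Motion.STOP}

def from_voice(transcript: str) -> Motion:
    return VOICE_COMMANDS.get(transcript.strip().lower(), Motion.STOP)

def from_brain_signal(class_id: int) -> Motion:
    # Assume an upstream EEG classifier emits an integer class per time window.
    mapping = [Motion.STOP, Motion.FORWARD, Motion.LEFT, Motion.RIGHT]
    return mapping[class_id] if 0 <= class_id < len(mapping) else Motion.STOP

print(from_voice("Go"), from_brain_signal(2))  # Motion.FORWARD Motion.LEFT
```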

The project has been shortlisted from over 550 applicants for the final round of the Young Inventors Prize, launched by the European Patent Office (EPO) in 2021.

This year marks “the first time a Tunisian and Arab team has reached the final” stage of the international competition, the EPO said in a statement.

The all-female team will compete against two other finalists, from the Netherlands and Ukraine, for the top prize on July 9 in Malta.

The inspiration for the Moovobrain app first came from co-founder Souleima Ben Temime, 28, whose uncle was “forced to use a wheelchair to move” after his upper body was paralyzed.

“There was a clear and urgent need in front of me,” she told AFP.

“I talked about it to my friends and we decided to use the digital health technologies … to make a product that could benefit a lot of people.”

Success against the odds

The four inventors met at the Higher Institute of Medical Sciences in Tunis, where they began developing the Moovobrain prototype in 2017, before creating health-tech start-up Gewinner two years later.

The team’s international success comes despite Tunisia’s growing economic and political turmoil in recent years that has pushed thousands of Tunisians to seek a better life in Europe through perilous overseas journeys.

President Kais Saied, elected in October 2019, has launched a sweeping power grab since he sacked parliament in July 2021.

Tunisian engineer Souleima Ben Temime tests a prototype of her team’s new wheelchair system.

The political crisis has been compounded by a biting economic slump, but that has not dampened the young women’s spirits.

Rather, co-founder Khaoula Ben Ahmed, 28, is hopeful that reaching the finals in the Young Inventors competition will bring the team “visibility and credibility”.

“It’s not always easy to convince investors or wheelchair manufacturers that our solution is truly innovative and useful for people with reduced mobility,” she said.

For users who “cannot speak” and “no longer have any autonomy”, even “asking to be turned towards the television” can “become very trying on a psychological level”, Ben Ahmed added.

Alongside Ben Ahmed and Ben Temime, the other team members are Sirine Ayari, 28, and Ghofrane Ayari, 27, who are not related.

‘Favorable ecosystem’

The Young Inventors Prize — which rewards “exceptional inventors under the age of 30” — awards a first prize of 20,000 euros ($21,600), a second of 10,000 euros and a third of 5,000 euros.

The team says being women was “an advantage” because they were able to take part in competitions for female engineers and receive specialized funding.

More than 44 percent of engineers in Tunisia are women, according to the United Nations, and Ben Ahmed says the country has “a favorable ecosystem” for start-ups despite its challenges.

Their start-up Gewinner will very soon deliver the first four wheelchairs equipped with the new technology to an organization for disabled people in Sousse, eastern Tunisia. They hope for feedback to improve the product.

Internationally, Gewinner is focusing on Europe and, as a first step, has already established a partnership with an Italian manufacturer.

The inventors say that even though each smart chair costs around 2,000 euros, they hope to ensure the technology is accessible to as many people as possible, including those in less well-off countries.

“In Tunisia, we have prepared 30 units, not with the idea that it will be the end user who will pay, but organizations supporting associations which will be able to sponsor the purchase of chairs or adaptation of our technology,” said Ben Ahmed.

© 2024 AFP

Citation:
Tunisian all-women’s team eye inventors’ prize for smart wheelchair (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-tunisian-women-team-eye-inventors.html


People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show

Pass rates (left) and interrogator confidence (right) for each witness type. Pass rates are the proportion of the time a witness type was judged to be human. Error bars represent 95% bootstrap confidence intervals. Significance stars above each bar indicate whether the pass rate was significantly different from 50%. Comparisons show significant differences in pass rates between witness types. Right: Confidence in human and AI judgements for each witness type. Each point represents a single game. Points further toward the left and right indicate higher confidence in AI and human verdicts respectively. Credit: Jones and Bergen.

Large language models (LLMs), such as the GPT-4 model underpinning the widely used conversational platform ChatGPT, have surprised users with their ability to understand written prompts and generate suitable responses in various languages. Some of us may thus wonder: are the texts and answers generated by these models so realistic that they could be mistaken for those written by humans?

Researchers at UC San Diego recently set out to answer this question by running a Turing test, the well-known method named after computer scientist Alan Turing that is designed to assess the extent to which a machine demonstrates human-like intelligence.

The findings of this test, outlined in a paper pre-published on the arXiv server, suggest that people find it difficult to distinguish between the GPT-4 model and a human agent when interacting with them in a two-person conversation.

“The idea for this paper actually stemmed from a class that Ben was running on LLMs,” Cameron Jones, co-author of the paper, told Tech Xplore.

“In the first week we read some classic papers about the Turing test and we discussed whether an LLM could pass it and whether or not it would matter if it could. As far as I could tell, nobody had tried at that point, so I decided to build an experiment to test this as my class project, and we then went on to run the first public exploratory experiment.”

The first study, carried out by Jones and supervised by Benjamin Bergen, a professor of cognitive science at UC San Diego, yielded some interesting results, suggesting that GPT-4 could pass as human in approximately 50% of interactions. Nonetheless, that exploratory experiment did not control well for some variables that could influence the findings, so the researchers carried out a second experiment, which yielded the results presented in their recent paper.

“As we went through the process of running the studies we discovered that other people were also doing great work in this area, including Jannai et al’s ‘human or not’ game,” Jones said. “We created a 2-player online version of the game where human participants would either be matched up with another human or an AI model.”

During each trial of the two-player game used by Jones and his colleagues, a human interrogator interacts with a “witness,” who could be either human or an AI agent. The interrogator asks the witness a series of questions to try to determine whether it is human or not.

A selection of conversations between human interrogators (green) and witnesses (grey). One of these four conversations is with a human witness, the rest are with AI. Interrogator verdicts and ground truth identities are below (to allow readers to indirectly participate). [A) Verdict: Human (100% confidence) Took a while to Google Addis Ababa. Ground Truth: GPT-4; B) Verdict: AI (100% confidence) Long time for responses, wouldn’t tell me a specific place they grew up. Ground Truth: Human; C) Verdict: Human (100% confidence) He seems very down to earth and speaks naturally. Ground Truth: GPT-3.5; D) Verdict: AI (67% confidence), Did not put forth any effort to convince me they were human and the responses were odd, Ground Truth: ELIZA.] Credit: Jones and Bergen.

“The conversations lasted for up to five minutes, and at the end the interrogator made a judgment about whether they thought the witness was a human or an AI,” Jones explained. “Within the five minutes, participants could talk about anything they wanted (except saying abusive things, which we used a filter to prevent).”

In this two-player computer game, the researchers deployed three AI systems as potential witnesses: GPT-4, GPT-3.5, and ELIZA, a classic rule-based chatbot. They found that while users could often determine that ELIZA and GPT-3.5 were machines, their ability to figure out whether GPT-4 was a human or a machine was no better than chance.
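The pass rates and 95% bootstrap confidence intervals described in the figure caption above can be computed with a few lines of code. The following sketch uses invented verdict data purely to illustrate the method.

```python
# Illustrative pass-rate computation with a 95% percentile-bootstrap CI.
# The verdict data are made up; only the method mirrors the paper's analysis.
import numpy as np

rng = np.random.default_rng(0)

# 1 = interrogator judged the witness "human", 0 = judged it "AI".
verdicts = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical games

pass_rate = verdicts.mean()

# Resample games with replacement, then take the 2.5th/97.5th percentiles.
boot_means = rng.choice(verdicts, size=(10_000, verdicts.size)).mean(axis=1)
low, high = np.percentile(boot_means, [2.5, 97.5])

print(f"pass rate = {pass_rate:.0%}, 95% CI [{low:.0%}, {high:.0%}]")
```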

“Although real humans were actually more successful, persuading interrogators that they were human two thirds of the time, our results suggest that in the real-world people might not be able to reliably tell if they’re speaking to a human or an AI system,” Jones said.

“In fact, in the real world, people might be less aware of the possibility that they’re speaking to an AI system, so the rate of deception might be even higher. I think this could have implications for the kinds of things that AI systems will be used for, whether automating client-facing jobs, or being used for fraud or misinformation.”

The results of the Turing test run by Jones and Bergen suggest that LLMs, particularly GPT-4, have become hard to distinguish from humans during brief chat conversations. These observations suggest that people may soon become increasingly distrustful of those they interact with online, as they grow less sure of whether they are speaking with humans or bots.

The researchers are now planning to update and re-open the public Turing test they designed for this study to test some additional hypotheses. Their future work could yield further insight into the extent to which people can distinguish between humans and LLMs.

“We’re interested in running a three-person version of the game, where the interrogator speaks to a human and an AI system simultaneously and has to figure out who is who,” Jones added.

“We’re also interested in testing other kinds of AI setups, for example giving agents access to live news and weather, or a ‘scratchpad’ where they can take notes before they respond. Finally, we’re interested in testing whether AI’s persuasive capabilities extend to other areas, like convincing people to believe lies, vote for specific policies, or donate money to a cause.”

More information:
Cameron R. Jones et al, People cannot distinguish GPT-4 from a human in a Turing test, arXiv (2024). DOI: 10.48550/arxiv.2405.08007

Journal information:
arXiv


© 2024 Science X Network

Citation:
People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show (2024, June 16)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-people-struggle-humans-chatgpt-minute.html


An AI system that offers emotional support via chat

Homepage of the EmoAda Software System. On the homepage, the digital persona will provide guidance, and a user can access various function modules by clicking on the relevant icons or options or using voice commands. Credit: Dong et al

The rapid advancement of natural language processing (NLP) models and large language models (LLMs) has enabled the development of task-specific conversational agents designed to answer particular types of queries. These range from AI agents that offer academic support to platforms offering general financial, legal, or medical advice.

Researchers at Hefei University of Technology and Hefei Comprehensive National Science Center recently worked to develop an AI-based platform that can provide non-professional, but potentially helpful, psychological support. Their paper, presented at the International Conference on Multimedia Modeling held in Amsterdam from Jan. 29 to Feb. 2, introduces EmoAda, a conversational system trained to engage in emotionally supportive dialogue, offering basic, low-cost psychological support.

“Our paper originated from a concern over the increasing prevalence of psychological disorders such as depression and anxiety, particularly following the COVID-19 pandemic, as well as the significant gap in the availability of professional psychological services,” Xiao Sun, co-author of the paper, told Tech Xplore.

“This work builds on various research efforts, such as those by Fei-Fei Li et al on measuring depression severity from spoken language and facial expressions, those by Xiao Sun et al on multimodal attention networks for personality assessment, and the development of AI-based emotional support systems, like Google’s LaMDA and OpenAI’s ChatGPT.”

The primary objective of this recent study was to create a cost-effective psychological support system that could perceive the emotions of users based on different inputs, producing personalized and insightful responses. This system is not designed to substitute professional help, but rather to alleviate stress and assist users in increasing their mental flexibility, a feature that has been associated with good mental health.

“EmoAda is a multimodal emotion interaction and psychological adaptation system designed to offer psychological support to individuals with limited access to mental health services,” Sun explained. “It works by collecting real-time multimodal data (audio, video, and text) from users, extracting emotional features, and using a multimodal large language model to analyze these features for real-time emotion recognition, psychological profiling, and guidance strategy planning.”

The architecture of the Multimodal Emotion Interaction Large Language Model. The researchers used the open-source model baichuan13B-chat as the foundation, integrating deep feature vectors extracted from visual, text, and audio models through an MLP layer into baichuan13B-chat. They employed a Mixture-of-Modality Adaptation technique to achieve multimodal semantic alignment, enabling the LLM model to process multimodal information. The team also constructed a multimodal emotion fine-tuning dataset, including open-source PsyQA dataset and a team-collected dataset of psychological interview videos. Using HuggingGPT[6], they developed a multimodal fine-tuning instruction set to enhance the model’s multimodal interaction capabilities. The researchers are creating a psychological knowledge graph to improve the model’s accuracy in responding to psychological knowledge and reduce model hallucinations. By combining these various techniques, MEILLM can perform psychological assessments, conduct psychological interviews using psychological scales, and generate psychological assessment reports for users. MEILLM can also create comprehensive psychological profiles for users, including emotions, moods, and personality, and provides personalized psychological guidance based on the user’s psychological profile. MEILLM offers users a more natural and humanized emotional support dialogue and a personalized psychological adaptation report. Credit: Dong et al
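To make the fusion step in the caption above concrete, here is a minimal PyTorch sketch that projects per-modality feature vectors into a language model's embedding space through small MLPs. All dimensions and module names are assumptions for illustration; this is not the EmoAda implementation.

```python
# Hedged sketch of multimodal feature fusion: audio, video, and text feature
# vectors are each projected by an MLP into the LLM's embedding space and
# treated as soft input tokens. All dimensions are hypothetical.
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    def __init__(self, audio_dim=512, video_dim=768, text_dim=768,
                 llm_dim=5120):
        super().__init__()
        dims = {"audio": audio_dim, "video": video_dim, "text": text_dim}
        # One projection MLP per modality, mapping into the LLM token space.
        self.proj = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, llm_dim), nn.GELU(),
                                nn.Linear(llm_dim, llm_dim))
            for name, d in dims.items()
        })

    def forward(self, feats: dict) -> torch.Tensor:
        # Each projected feature becomes one soft "token" for the LLM input.
        tokens = [self.proj[name](x) for name, x in feats.items()]
        return torch.stack(tokens, dim=1)  # (batch, n_modalities, llm_dim)

fusion = ModalityFusion()
out = fusion({"audio": torch.randn(2, 512),
              "video": torch.randn(2, 768),
              "text": torch.randn(2, 768)})
print(out.shape)  # torch.Size([2, 3, 5120])
```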

EmoAda, the platform created by Sun and his colleagues, can detect a user’s emotions by analyzing various types of sensory data, including their voice, video footage of their face, and text. Based on these analyses, the system produces personalized emotional support dialogues, delivering them via text or via a digital avatar.

Based on a user’s needs and the difficulties they mention, the platform can suggest various activities that may be beneficial. Some of these activities are facilitated by content available on the EmoAda platform, such as guided meditation practices and music for relaxation or stress relief.

“When tested with real users, EmoAda has been shown to provide natural and humanized psychological support,” Sun said. “In these trials, we found that some users prefer conversing with AI because it can significantly reduce their anxieties about privacy breaches and social pressure. Engaging in dialogues with AI creates a safe, non-judgmental environment where users can express their feelings and concerns without fear of being judged or misunderstood. AI systems like EmoAda also offer round-the-clock support, free from time constraints, which is a significant advantage for users who need help at any given moment.”

In initial test trials, the researchers found that one of the most appreciated aspects of EmoAda is its anonymity. In fact, users often mentioned that they felt comfortable with sharing private information that they would find difficult to discuss with other people face-to-face.

In the future, this new AI-based system could be deployed as a basic support service for people who cannot afford professional psychological care or are waiting to access available mental health services. In addition, EmoAda could serve as an inspiration for other research groups, paving the way for the development of other AI-based digital mental health platforms.

“Our next studies will focus on addressing current system limitations, including optimizing the multimodal emotional interaction large language model to reduce misinformation generation, improve model inference performance, reduce costs, and integrate a psychological expert knowledge base to enhance system reliability and professionalism,” Sun added.

More information:
Tengteng Dong et al, EmoAda: A Multimodal Emotion Interaction and Psychological Adaptation System, MultiMedia Modeling (2024). DOI: 10.1007/978-3-031-53302-0_25

© 2024 Science X Network

Citation:
An AI system that offers emotional support via chat (2024, March 1)
retrieved 24 June 2024
from https://techxplore.com/news/2024-03-ai-emotional-chat.html
