
AmEx buys dining reservation company Tock from Squarespace for $400M

An American Express logo is attached to a door in Boston’s Seaport District, July 21, 2021. American Express announced Friday, June 21, 2024, it will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash. Credit: AP Photo/Steven Senne, File

American Express will acquire the dining reservation and event management platform Tock from Squarespace for $400 million cash.

AmEx began making acquisitions in the dining and event space with its purchase of Resy five years ago, giving cardmembers access to hard-to-get restaurants and venues. Other credit card issuers have done the same: JPMorgan acquired the lifestyle brand The Infatuation in 2021.

Tock, which launched in Chicago in 2014 and has been owned by Squarespace since 2021, provides reservation and table management services to roughly 7,000 restaurants and other venues. Restaurants signed up with Tock include Aquavit, the high-end Nordic restaurant in New York, as well as the buzzy new restaurant Chez Noir in California.

Squarespace and Tock confirmed the deal Friday.

AmEx’s purchase of Resy five years ago raised a lot of eyebrows in both the credit card and dining industries, but it’s become a key part of how the company locks in high-end businesses to be either AmEx-exclusive merchants, or ones that give preferential treatment to AmEx cardmembers. The number of restaurants on the Resy platform has grown fivefold since AmEx purchased the company.

AmEx also announced Friday it would buy Rooam, a contactless payment platform that is used heavily in stadiums and other entertainment venues. AmEx did not disclose how much it was paying for Rooam.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
AmEx buys dining reservation company Tock from Squarespace for $400M (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-amex-buys-dining-reservation-company.html


Tunisian all-women’s team eye inventors’ prize for smart wheelchair

Tunisian engineers Khaoula Ben Ahmed and Souleima Ben Temime work on their team’s new wheelchair system.

A smart wheelchair system built by a team of young Tunisian women engineers has reached the finals for a prestigious European inventors’ prize, setting a hopeful precedent in a country embroiled in multiple crises.

Their project, Moovobrain, allows wheelchair users to move through a choice of touchpad, voice command, facial gestures or, most impressively, a headset that detects their brain signals.
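Moovobrain’s internals have not been published, so the following is only a rough illustration of the multi-input idea: a minimal Python sketch that funnels several modalities into one set of chair commands. Every label, mapping and function name below is hypothetical.

```python
from enum import Enum

class Command(Enum):
    FORWARD = "forward"
    LEFT = "left"
    RIGHT = "right"
    STOP = "stop"

# Hypothetical mappings from each modality's raw classification to a
# wheelchair command. A real system would add confidence thresholds,
# debouncing and a hardware safety interlock.
VOICE_WORDS = {"go": Command.FORWARD, "left": Command.LEFT,
               "right": Command.RIGHT, "stop": Command.STOP}
GESTURES = {"head_left": Command.LEFT, "head_right": Command.RIGHT,
            "eyebrows_up": Command.FORWARD, "blink_double": Command.STOP}
EEG_CLASSES = {0: Command.STOP, 1: Command.FORWARD,
               2: Command.LEFT, 3: Command.RIGHT}

def interpret(modality: str, observation) -> Command:
    """Translate one observation from any supported modality into a
    single wheelchair command, defaulting to STOP when unsure."""
    if modality == "touchpad":   # observation is already a Command
        return observation
    if modality == "voice":      # observation is a recognized word
        return VOICE_WORDS.get(observation, Command.STOP)
    if modality == "gesture":    # observation is a gesture label
        return GESTURES.get(observation, Command.STOP)
    if modality == "eeg":        # observation is a headset classifier output
        return EEG_CLASSES.get(observation, Command.STOP)
    return Command.STOP

print(interpret("voice", "left"))  # Command.LEFT
print(interpret("eeg", 1))         # Command.FORWARD
```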

It has been shortlisted from over 550 applicants for the final round of the Young Inventors Prize, launched by the European Patent Office in 2021.

This year marks “the first time a Tunisian and Arab team has reached the final” stage of the international competition, the EPO said in a statement.

The all-female team will compete against two other finalists, from the Netherlands and Ukraine, for the top prize on July 9 in Malta.

The inspiration for the Moovobrain app first came from co-founder Souleima Ben Temime, 28, whose uncle was “forced to use a wheelchair to move” after his upper body was paralyzed.

“There was a clear and urgent need in front of me,” she told AFP.

“I talked about it to my friends and we decided to use the digital health technologies … to make a product that could benefit a lot of people.”

Success against odds

The four inventors met at the Higher Institute of Medical Sciences in Tunis, where they began developing the Moovobrain prototype in 2017, before creating health-tech start-up Gewinner two years later.

The team’s international success comes despite Tunisia’s deepening economic and political turmoil, which in recent years has pushed thousands of Tunisians to seek a better life in Europe through perilous sea journeys.

President Kais Saied, elected in October 2019, has launched a sweeping power grab since he sacked parliament in July 2021.

Tunisian engineer Souleima Ben Temime tests a prototype of her team’s new wheelchair system.

The political crisis has been compounded by a biting economic meltdown — but that has not dampened the young women’s spirits.

Rather, co-founder Khaoula Ben Ahmed, 28, is hopeful that reaching the finals in the Young Inventors competition will bring the team “visibility and credibility”.

“It’s not always easy to convince investors or wheelchair manufacturers that our solution is truly innovative and useful for people with reduced mobility,” she said.

For wheelchair users, even “asking to be turned towards the television” when they “cannot speak, no longer have any autonomy” can become “very trying on a psychological level”, Ben Ahmed added.

Alongside Ben Ahmed and Ben Temime, the other team members are Sirine Ayari, 28, and Ghofrane Ayari, 27, who are not related.

‘Favorable ecosystem’

The Young Inventors Prize — which rewards “exceptional inventors under the age of 30” — awards a first prize of 20,000 euros ($21,600), a second of 10,000 euros and a third of 5,000 euros.

The team says being women was “an advantage” because they were able to take part in competitions for female engineers and receive specialized funding.

More than 44 percent of engineers in Tunisia are women, according to the United Nations, and Ben Ahmed says the country has “a favorable ecosystem” for start-ups despite its challenges.

Their start-up Gewinner will very soon deliver the first four wheelchairs equipped with the new technology to an organization for disabled people in Sousse, eastern Tunisia. They hope for feedback to improve the product.

Internationally, Gewinner is focusing on Europe and has already established a partnership with an Italian manufacturer.

The inventors say that even though each smart chair costs around 2,000 euros, they hope to ensure the technology is accessible to as many people as possible, including those in less well-off countries.

“In Tunisia, we have prepared 30 units, not with the idea that it will be the end user who will pay, but organizations supporting associations which will be able to sponsor the purchase of chairs or adaptation of our technology,” said Ben Ahmed.

© 2024 AFP

Citation:
Tunisian all-women’s team eye inventors’ prize for smart wheelchair (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-tunisian-women-team-eye-inventors.html


People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show

Pass rates (left) and interrogator confidence (right) for each witness type. Left: Pass rates are the proportion of the time a witness type was judged to be human; error bars represent 95% bootstrap confidence intervals, significance stars above each bar indicate whether the pass rate was significantly different from 50%, and comparisons show significant differences in pass rates between witness types. Right: Confidence in human and AI judgments for each witness type; each point represents a single game, and points further toward the left and right indicate higher confidence in AI and human verdicts, respectively. Credit: Jones and Bergen.

Large language models (LLMs), such as the GPT-4 model underpinning the widely used conversational platform ChatGPT, have surprised users with their ability to understand written prompts and generate suitable responses in various languages. Some of us may thus wonder: are the texts and answers generated by these models so realistic that they could be mistaken for those written by humans?

Researchers at UC San Diego recently set out to answer this question by running a Turing test, the well-known method named after computer scientist Alan Turing that is designed to assess the extent to which a machine demonstrates human-like intelligence.

The findings of this test, outlined in a paper pre-published on the arXiv server, suggest that people find it difficult to distinguish between the GPT-4 model and a human agent when interacting with them in a two-person conversation.

“The idea for this paper actually stemmed from a class that Ben was running on LLMs,” Cameron Jones, co-author of the paper, told Tech Xplore.

“In the first week we read some classic papers about the Turing test and we discussed whether an LLM could pass it and whether or not it would matter if it could. As far as I could tell, nobody had tried at that point, so I decided to build an experiment to test this as my class project, and we then went on to run the first public exploratory experiment.”

The first study, carried out by Jones and supervised by Bergen, a professor of cognitive science at UC San Diego, yielded some interesting results, suggesting that GPT-4 could pass as human in approximately 50% of interactions. Nonetheless, that exploratory experiment did not control well for some variables that could influence the findings, so the researchers carried out a second experiment, which yielded the results presented in their recent paper.

“As we went through the process of running the studies we discovered that other people were also doing great work in this area, including Jannai et al’s ‘human or not’ game,” Jones said. “We created a 2-player online version of the game where human participants would either be matched up with another human or an AI model.”

During each trial of the two-player game used by Jones and his colleagues, a human interrogator interacts with a “witness,” who could be either a human or an AI agent. The interrogator asks the witness a series of questions to try to determine whether it is human or not.

A selection of conversations between human interrogators (green) and witnesses (grey). One of these four conversations is with a human witness, the rest are with AI. Interrogator verdicts and ground truth identities are below (to allow readers to indirectly participate). [A) Verdict: Human (100% confidence) Took a while to Google Addis Ababa. Ground Truth: GPT-4; B) Verdict: AI (100% confidence) Long time for responses, wouldn’t tell me a specific place they grew up. Ground Truth: Human; C) Verdict: Human (100% confidence) He seems very down to earth and speaks naturally. Ground Truth: GPT-3.5; D) Verdict: AI (67% confidence), Did not put forth any effort to convince me they were human and the responses were odd, Ground Truth: ELIZA.] Credit: Jones and Bergen.

“The conversations lasted for up to five minutes, and at the end the interrogator made a judgment about whether they thought the witness was a human or an AI,” Jones explained. “Within the five minutes, participants could talk about anything they wanted (except saying abusive things, which we used a filter to prevent).”

In this two-player computer game, the researchers deployed three AI systems as potential witnesses: the GPT-4 and GPT-3.5 models and ELIZA, a rule-based chatbot from the 1960s. They found that while users could often determine that ELIZA and GPT-3.5 were machines, their ability to figure out whether GPT-4 was a human or a machine was no better than chance.
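The figure caption above mentions 95% bootstrap confidence intervals and comparisons against the 50% chance level. As a rough illustration of that style of analysis, not the authors’ actual code, the following sketch computes a pass rate and its bootstrap interval from made-up per-game verdicts:

```python
import random

random.seed(0)

# Hypothetical per-game verdicts for one witness type:
# 1 = interrogator judged the witness "human", 0 = judged it "AI".
verdicts = [1] * 54 + [0] * 46   # a made-up 54% pass rate over 100 games

def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    """95% bootstrap confidence interval for the mean pass rate."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

pass_rate = sum(verdicts) / len(verdicts)
lo, hi = bootstrap_ci(verdicts)
print(f"pass rate = {pass_rate:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
# The witness is indistinguishable from chance if the interval contains 0.50.
```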

“Although real humans were actually more successful, persuading interrogators that they were human two-thirds of the time, our results suggest that in the real world, people might not be able to reliably tell if they’re speaking to a human or an AI system,” Jones said.

“In fact, in the real world, people might be less aware of the possibility that they’re speaking to an AI system, so the rate of deception might be even higher. I think this could have implications for the kinds of things that AI systems will be used for, whether automating client-facing jobs, or being used for fraud or misinformation.”

The results of the Turing test run by Jones and Bergen suggest that LLMs, particularly GPT-4, have become hard to distinguish from humans during brief chat conversations. These observations suggest that people might soon become increasingly distrustful of others they interact with online, as they grow less sure of whether those others are humans or bots.

The researchers now plan to update and reopen the public Turing test they designed for this study in order to test some additional hypotheses. Their future work could yield further insight into the extent to which people can distinguish between humans and LLMs.

“We’re interested in running a three-person version of the game, where the interrogator speaks to a human and an AI system simultaneously and has to figure out who is who,” Jones added.

“We’re also interested in testing other kinds of AI setups, for example giving agents access to live news and weather, or a ‘scratchpad’ where they can take notes before they respond. Finally, we’re interested in testing whether AI’s persuasive capabilities extend to other areas, like convincing people to believe lies, vote for specific policies, or donate money to a cause.”

More information:
Cameron R. Jones et al, People cannot distinguish GPT-4 from a human in a Turing test, arXiv (2024). DOI: 10.48550/arxiv.2405.08007

Journal information:
arXiv


© 2024 Science X Network

Citation:
People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show (2024, June 16)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-people-struggle-humans-chatgpt-minute.html


An AI system that offers emotional support via chat

Homepage of the EmoAda Software System. On the homepage, the digital persona will provide guidance, and a user can access various function modules by clicking on the relevant icons or options or using voice commands. Credit: Dong et al

The rapid advancement of natural language processing (NLP) models and large language models (LLMs) has enabled the development of task-specific conversational agents designed to answer particular types of queries. These range from AI agents that offer academic support to platforms offering general financial, legal or medical advice.

Researchers at Hefei University of Technology and Hefei Comprehensive National Science Center recently worked to develop an AI-based platform that can provide non-professional, but potentially helpful, psychological support. Their paper, presented at the International Conference on Multimedia Modeling held in Amsterdam from Jan. 29 to Feb. 2, introduces EmoAda, a conversational system trained to engage in emotional conversations, offering low-cost and basic psychological support.

“Our paper originated from a concern over the increasing prevalence of psychological disorders such as depression and anxiety, particularly following the COVID-19 pandemic, as well as the significant gap in the availability of professional psychological services,” Xiao Sun, co-author of the paper, told Tech Xplore.

“This work builds on various research efforts, such as those by Fei-Fei Li et al on measuring depression severity from spoken language and facial expressions, those by Xiao Sun et al on multimodal attention networks for personality assessment, and the development of AI-based emotional support systems, like Google’s LaMDA and OpenAI’s ChatGPT.”

The primary objective of this recent study was to create a cost-effective psychological support system that could perceive the emotions of users based on different inputs, producing personalized and insightful responses. This system is not designed to substitute professional help, but rather to alleviate stress and assist users in increasing their mental flexibility, a feature that has been associated with good mental health.

“EmoAda is a multimodal emotion interaction and psychological adaptation system designed to offer psychological support to individuals with limited access to mental health services,” Sun explained. “It works by collecting real-time multimodal data (audio, video, and text) from users, extracting emotional features, and using a multimodal large language model to analyze these features for real-time emotion recognition, psychological profiling, and guidance strategy planning.”
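EmoAda’s code has not been released. As a purely schematic sketch of the loop Sun describes (capture, feature extraction, emotion recognition, profiling, guidance planning), here is a minimal Python outline in which every function, feature and rule is a hypothetical stand-in:

```python
from dataclasses import dataclass

@dataclass
class MultimodalInput:
    audio_features: list   # stand-in for prosody embeddings
    video_features: list   # stand-in for facial-expression embeddings
    text: str              # transcribed or typed user message

# Placeholder stages standing in for the trained models in the paper.
def recognize_emotion(x: MultimodalInput) -> str:
    return "anxious" if "worried" in x.text.lower() else "neutral"

def update_profile(profile: dict, emotion: str) -> dict:
    profile[emotion] = profile.get(emotion, 0) + 1  # running emotion counts
    return profile

def plan_guidance(emotion: str, profile: dict) -> str:
    if emotion == "anxious":
        return "suggest guided meditation and a calming dialogue tone"
    return "continue open-ended supportive conversation"

profile: dict = {}
turn = MultimodalInput([0.2, 0.7], [0.1, 0.4], "I'm worried about my exams")
emotion = recognize_emotion(turn)
profile = update_profile(profile, emotion)
print(emotion, "->", plan_guidance(emotion, profile))
```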

An AI system that offers emotional support via chat
The architecture of the Multimodal Emotion Interaction Large Language Model (MEILLM). The researchers used the open-source model baichuan13B-chat as the foundation, integrating deep feature vectors extracted from visual, text, and audio models into it through an MLP layer, and employed a Mixture-of-Modality Adaptation technique to achieve multimodal semantic alignment, enabling the LLM to process multimodal information. The team also constructed a multimodal emotion fine-tuning dataset, combining the open-source PsyQA dataset with a team-collected dataset of psychological interview videos, and used HuggingGPT to develop a multimodal fine-tuning instruction set that enhances the model’s multimodal interaction capabilities. The researchers are also creating a psychological knowledge graph to improve the accuracy of the model’s responses to psychological questions and reduce hallucinations. By combining these techniques, MEILLM can perform psychological assessments, conduct interviews using psychological scales, and generate assessment reports for users. It can also create comprehensive psychological profiles covering emotions, moods, and personality, and provide personalized psychological guidance and adaptation reports based on each user’s profile, offering a more natural and humanized emotional support dialogue. Credit: Dong et al
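The caption describes projecting deep feature vectors from the audio, vision and text encoders through an MLP layer into the language model’s embedding space. The released dimensions and code are not public, so the following PyTorch sketch only illustrates that projection pattern; all sizes, names and the two-layer MLP design are assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn

LLM_DIM = 5120                                    # assumed hidden size of a 13B-class LLM
AUDIO_DIM, VIDEO_DIM = 768, 1024                  # assumed encoder output sizes

class ModalityProjector(nn.Module):
    """Project one modality's feature vector into the LLM embedding space."""
    def __init__(self, in_dim: int, out_dim: int = LLM_DIM):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.GELU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

audio_proj = ModalityProjector(AUDIO_DIM)
video_proj = ModalityProjector(VIDEO_DIM)

# Fake encoder outputs for one conversational turn (batch of 1).
audio_feat = torch.randn(1, AUDIO_DIM)
video_feat = torch.randn(1, VIDEO_DIM)

# Projected features become extra "soft tokens" prepended to the text-token
# embeddings before the language model processes the sequence.
text_embeds = torch.randn(1, 12, LLM_DIM)         # stand-in for embedded text tokens
soft_tokens = torch.stack([audio_proj(audio_feat), video_proj(video_feat)], dim=1)
llm_input = torch.cat([soft_tokens, text_embeds], dim=1)
print(llm_input.shape)                            # torch.Size([1, 14, 5120])
```

Prepending projected "soft tokens" to the text embeddings is one common way to let a largely frozen LLM attend to non-text modalities without retraining it from scratch.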

EmoAda, the platform created by Sun and his colleagues, can detect a user’s emotions by analyzing various types of sensory data, including their voice, video footage of their face, and text. Based on these analyses, the system produces personalized emotional support dialogues, delivering them via text or via a digital avatar.

Based on a user’s needs and the difficulties they mention, the platform can suggest activities that may be beneficial. Some of these activities are facilitated by content available on the EmoAda platform, such as guided meditation practices and music for relaxation or stress relief.

“When tested with real users, EmoAda has been shown to provide natural and humanized psychological support,” Sun said. “In these trials, we found that some users prefer conversing with AI because it can significantly reduce their anxieties about privacy breaches and social pressure. Engaging in dialogues with AI creates a safe, non-judgmental environment where users can express their feelings and concerns without fear of being judged or misunderstood. AI systems like EmoAda also offer round-the-clock support, free from time constraints, which is a significant advantage for users who need help at any given moment.”

In initial test trials, the researchers found that one of the most appreciated aspects of EmoAda is its anonymity. Users often mentioned that they felt comfortable sharing private information that they would find difficult to discuss with other people face-to-face.

In the future, this new AI-based system could be deployed as a basic support service for people who cannot afford professional psychological care or are waiting to access available mental health services. In addition, EmoAda could serve as an inspiration for other research groups, paving the way for the development of other AI-based mental health-related digital platforms.

“Our next studies will focus on addressing current system limitations, including optimizing the multimodal emotional interaction large language model to reduce misinformation generation, improve model inference performance, reduce costs, and integrate a psychological expert knowledge base to enhance system reliability and professionalism,” Sun added.

More information:
Tengteng Dong et al, EmoAda: A Multimodal Emotion Interaction and Psychological Adaptation System, MultiMedia Modeling (2024). DOI: 10.1007/978-3-031-53302-0_25

© 2024 Science X Network

Citation:
An AI system that offers emotional support via chat (2024, March 1)
retrieved 24 June 2024
from https://techxplore.com/news/2024-03-ai-emotional-chat.html


Researchers develop Superman-inspired imager chip for mobile devices

Researchers, including electrical engineering graduate student Walter Sosa Portillo BS’21 (left) and Dr. Kenneth K. O, have made advances to miniaturize an imager chip inspired by Superman’s X-ray vision for handheld mobile devices. Credit: University of Texas at Dallas

Researchers from The University of Texas at Dallas and Seoul National University have developed an imager chip inspired by Superman’s X-ray vision that could be used in mobile devices to make it possible to detect objects inside packages or behind walls.

Chip-enabled cellphones might be used to find studs, wooden beams or wiring behind walls, cracks in pipes, or outlines of contents in envelopes and packages. The technology also could have medical applications.

The researchers first demonstrated the imaging technology in a 2022 study. Their latest paper, published in the March print edition of IEEE Transactions on Terahertz Science and Technology, shows how researchers solved one of their biggest challenges: making the technology small enough for handheld mobile devices while improving image quality.

“This technology is like Superman’s X-ray vision. Of course, we use signals at 200 gigahertz to 400 gigahertz instead of X-rays, which can be harmful,” said Dr. Kenneth K. O, director of the Texas Analog Center of Excellence (TxACE) and the Texas Instruments Distinguished University Chair in the Erik Jonsson School of Engineering and Computer Science.

The research was supported by the Texas Instruments (TI) Foundational Technology Research Program on Millimeter Wave and High Frequency Microsystems and the Samsung Global Research Outreach Program.

“It took 15 years of research that improved pixel performance by 100 million times, combined with digital signal processing techniques, to make this imaging demonstration possible. This disruptive technology shows the potential capability of true THz imaging,” said Dr. Brian Ginsburg, director of RF/mmW and high-speed research at TI’s Kilby Labs.


With privacy issues in mind, the researchers designed the technology for use only at close range, about 1 inch from an object. For example, if a thief tried to scan the contents of someone’s bag, the thief would need to be so close that the person would be aware of what they were doing, Dr. O said. The next iteration of the imager chip should be able to capture images up to 5 inches away and make it easier to see smaller objects.

The imager emits 300-GHz signals in the millimeter-wave band, the range of electromagnetic frequencies between microwave and infrared; these signals are invisible to the human eye and considered safe for humans. A similar technology, using microwaves, is used in the large, stationary passenger screeners at airports.

“We designed the chip without lenses or optics so that it could fit into a mobile device. The pixels, which create images by detecting signals reflected from a target object, have the shape of a 0.5-mm square, about the size of a grain of sand,” said Dr. Wooyeol Choi, assistant professor at Seoul National University and the corresponding author of the latest paper.
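For a sense of scale: at 300 GHz the free-space wavelength is c/f, roughly 1 mm, comparable to the 0.5-mm pixel size. The sketch below uses that relation and a toy grid of complex reflection coefficients to show how per-pixel amplitude and phase readings can reveal a hidden object’s outline; it is a schematic illustration, not the UT Dallas signal chain.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
f = 300e9                  # 300 GHz operating frequency
wavelength = C / f
print(f"wavelength = {wavelength * 1e3:.2f} mm")   # ~1.00 mm

# Toy reflection-mode scene: a 64x64 grid of complex reflection
# coefficients, with a strongly reflecting rectangular "stud" hidden
# in a weakly reflecting background.
rng = np.random.default_rng(0)
background_phase = rng.uniform(0, 2 * np.pi, (64, 64))
scene = 0.05 * rng.standard_normal((64, 64)) * np.exp(1j * background_phase)
scene[20:44, 28:34] += 0.8 * np.exp(1j * 0.3)      # hidden reflector

# Each pixel reports the amplitude and phase of the reflected signal;
# the amplitude image alone already reveals the object's outline.
amplitude = np.abs(scene)
phase = np.angle(scene)
print("mean amplitude inside object:", amplitude[20:44, 28:34].mean().round(2))
print("mean amplitude of background:", amplitude[:, :20].mean().round(2))
```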

The advances to miniaturize the imager chip for mobile devices are the result of nearly two decades of research by Dr. O and his team of students, researchers and collaborators through the TxACE at UT Dallas.

Study author Walter Sosa Portillo BS’21, an electrical engineering graduate student, came to work in Dr. O’s lab as an undergraduate after learning about the imaging research.

“The first day I came to orientation, they talked about Dr. O’s research, and I thought it was really interesting and pretty cool to be able to see through things,” said Portillo, who is researching medical applications for the imager.

More information:
Pranith Reddy Byreddy et al, Array of 296-GHz CMOS Concurrent Transceiver Pixels With Phase and Amplitude Extraction Circuits for Improving Reflection-Mode Imaging Resolution, IEEE Transactions on Terahertz Science and Technology (2024). DOI: 10.1109/TTHZ.2024.3350515

Citation:
Researchers develop Superman-inspired imager chip for mobile devices (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-superman-imager-chip-mobile-devices.html
