Future-self chatbot gives users a glimpse of the life ahead of them

“Future You” is an interactive chat platform that allows users to chat in real time with a relatable yet virtual version of their future selves, via a large language model personalized from a pre-intervention survey centered on the user's future goals and personal qualities. To make the conversation realistic, the system generates an individualized synthetic memory for the user's future self, containing a backstory for the user at age 60. To increase the believability of the future-self character, the system applies age progression to the user's portrait. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.12514

A team of AI researchers from several institutions in the U.S. and KASIKORN Labs in Thailand has built an AI-based chatbot that allows users to chat with a potential version of their future selves.

The group has published a paper on the arXiv preprint server describing the technology and how it has been received by volunteers who interacted with the system.

As chatbots grow more sophisticated, system builders have begun looking for new ways to use them. In this new effort, the team, based at MIT, built a chatbot that gives users a sense of their own fate by allowing them to chat with a potential future version of themselves.

Prior research has shown that when younger people spend time talking with older people, they often come away with a broader outlook on life and on how their own future might unfold. Reasoning that young people would benefit even more from talking with their own older selves, the researchers set out to build a system that mimics such an opportunity.

To create the chatbot, the researchers put together several modules, the first of which involved building a regular chatbot that asked users a series of questions about themselves and the people in their lives. It also asked about their background, their hopes and plans for the future and their vision of an idealized life.

The same chatbot also asked users to submit a current picture of themselves. A separate routine aged the photo, allowing the user to see what they might look like in the distant future.

The second module fed the information from the first module to a separate language model, which generated “memories” based on the experiences of others, mixed with some of the events and experiences of the original user.

The third module was the future chatbot itself. It applied the results of the first two modules as it interacted with the same user, giving future, experience-based answers to questions.
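In code, such a three-module pipeline might be wired together roughly as follows. This is a minimal sketch assuming an OpenAI-style chat API; the prompts, model name, and helper functions are illustrative stand-ins rather than the authors' implementation, and the photo-aging step is omitted.

```python
# Sketch of the three-module "Future You" pipeline described above.
# Module boundaries follow the article; everything else is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_intake_survey() -> dict:
    """Module 1: collect goals, relationships and an idealized-life vision."""
    questions = [
        "What do you hope to be doing ten years from now?",
        "Who are the most important people in your life?",
        "Describe a day in your ideal future life.",
    ]
    return {q: input(q + " ") for q in questions}

def generate_synthetic_memory(survey: dict) -> str:
    """Module 2: turn the survey into a first-person backstory at age 60."""
    prompt = ("Write a first-person 'memory' of life up to age 60 for a "
              f"person whose intake survey answers were: {survey}")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not the paper's model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def future_self_reply(memory: str, user_message: str) -> str:
    """Module 3: answer as the 60-year-old self, grounded in the memory."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are the user's 60-year-old future self. "
                         "Ground every answer in this backstory: " + memory)},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content
```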

The research team tested the system using themselves as guinea pigs and then asked 344 volunteers to give it a go as well and report how it went.

The research team found mostly positive results—most users reported feeling more optimistic about their future and more connected to their future selves. And one of the researchers, after a session with the new bot, found himself more aware of the limited amount of time he would have with his parents and began to spend more time with them.

More information:
Pat Pataranutaporn et al, Future You: A Conversation with an AI-Generated Future Self Reduces Anxiety, Negative Emotions, and Increases Future Self-Continuity, arXiv (2024). DOI: 10.48550/arxiv.2405.12514

Project: www.media.mit.edu/projects/future-you/overview/



© 2024 Science X Network

Citation:
Future-self chatbot gives users a glimpse of the life ahead of them (2024, June 5)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-future-chatbot-users-glimpse-life.html


Apple partners with OpenAI as it unveils ‘Apple Intelligence’



Tim Cook, Apple chief executive officer, speaks during Apple’s annual Worldwide Developers Conference in Cupertino, California.

Apple on Monday unveiled “Apple Intelligence,” its suite of new AI features for its coveted devices—and a partnership with OpenAI—as it seeks to catch up to rivals racing ahead on adopting the white-hot technology.

For months, pressure has been on Apple to persuade doubters on its AI strategy, after Microsoft and Google rolled out products in rapid-fire succession.

But this latest move will take the experience of Apple products “to new heights,” chief executive Tim Cook said as he opened an annual Worldwide Developers Conference at the tech giant’s headquarters in the Silicon Valley city of Cupertino, California.

To that end, Apple has partnered with OpenAI, which ushered in a new era for generative artificial intelligence in 2022 with the arrival of ChatGPT.

OpenAI was “very happy to be partnering with Apple to integrate ChatGPT into their devices later this year! Think you will really like it,” posted the company’s chief Sam Altman on social media.

Apple Intelligence will also be part of iOS 18, the new version of the operating system similarly unveiled Monday at the week-long conference.

Apple executives stressed privacy safeguards have been built into Apple Intelligence to make its Siri digital assistant and other products smarter, without pilfering user data.

The big challenge for Apple has been how to infuse ChatGPT-style AI—which voraciously feeds off data—into its products without weakening its heavily promoted user privacy and security, according to analysts.

The system “puts powerful generative models right at the core of your iPhone, iPad and Mac,” said Apple senior vice president of software engineering Craig Federighi.

“It draws on your personal context to give you intelligence that’s most helpful and relevant for you, and it protects your privacy at every step.”

But Tesla and SpaceX tycoon Elon Musk lashed out at the partnership, saying the threat to data security will make him ban iPhones at his companies.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” Musk said in a post on social media.

Musk is building his own rival to OpenAI, xAI, and is suing the company that he helped found in 2015.

Sam Altman, chief executive officer of OpenAI, attends Apple’s annual Worldwide Developers Conference (WWDC) in Cupertino, California.

Apple Intelligence, which runs only on the company’s in-house technology, will enable users to create their own emojis based on a description in everyday language, or to generate brief summaries of e-mails in the mailbox.

Apple said Siri, its voice assistant, will also get an AI-infused upgrade and will now appear as a pulsating light around the edge of the screen.

Launched more than 12 years ago, Siri has long been seen as a dated feature, overtaken by a new generation of assistants such as GPT-4o, OpenAI’s latest offering.

GPT-4o grabbed the headlines last month when actress Scarlett Johansson accused OpenAI of copying her voice to embody the assistant after she turned down an offer to work with the company.

OpenAI has denied this, but suspended the use of the new voice in its products.

ChatGPT on offer

Under the deal with OpenAI, users can choose to hand certain Siri requests over to ChatGPT, Federighi said.

“It sounds like it’s Apple—then if it needs ChatGPT, it offers it to you,” Techsponential analyst Avi Greengart said.

“The implementation is what is special here.”
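As a purely hypothetical illustration of that consent-gated handoff (Apple has not published its routing logic, and none of the names below reflect its actual APIs), the flow might look like this:

```python
# Hypothetical sketch of a per-request, opt-in assistant handoff.
# The intent set, classifier, and forwarding step are all assumptions.

LOCAL_INTENTS = {"set_timer", "send_message", "play_music"}  # assumed on-device skills

def classify_intent(request: str) -> str:
    """Stand-in classifier; a real assistant would use an on-device model."""
    return "set_timer" if "timer" in request.lower() else "open_question"

def user_consents(prompt: str) -> bool:
    """Ask before any request leaves the device, mirroring the per-request opt-in."""
    return input(prompt + " [y/N] ").strip().lower() == "y"

def answer(request: str) -> str:
    if classify_intent(request) in LOCAL_INTENTS:
        return f"(on-device) Handling: {request}"
    if user_consents("Use ChatGPT for this request?"):
        return f"(ChatGPT) Would forward: {request}"  # external call elided
    return "Request kept on-device; no external model was used."
```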

The partnership with OpenAI was not exclusive, unlike Apple’s landmark tie-up with Google for search, which has drawn the scrutiny of antitrust regulators.

Apple said it expected to announce support for other AI models in the future.

The company founded by Steve Jobs had remained very quiet on AI since the start of the ChatGPT-sparked frenzy, with Apple for a while avoiding the term altogether.

But the pressure became too great, with Wall Street propelling Microsoft past Apple as the world’s biggest company by market value, largely because of the Windows-maker’s unabashed embrace of AI.

Wall Street investors were not overly impressed by the AI announcements, with Apple’s share price down nearly two percent at the close on Monday.

© 2024 AFP

Citation:
Apple partners with OpenAI as it unveils ‘Apple Intelligence’ (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-apple-partners-openai-unveils-intelligence.html


Incorporating ‘touch’ into social media interactions can increase feelings of support and approval, study suggests



Incorporating "touch" into social media interactions can increase feelings of support and approval
The S-CAT and FaceJournal interface. (A) The inside of the S-CAT, the side that contacts the skin; the pneumatic actuators, covered by a silicone layer on the outside, were positioned on the participant's forearm, where touch was delivered. (B) A participant during a session, wearing the S-CAT on their left forearm and looking at their post on the FaceJournal platform while receiving visuotactile feedback via the S-CAT and the visual emoticon on the screen. The participant sits behind a makeshift wall and cannot see the experimenter on the other side delivering the tactile feedback. Credit: Saramandi et al., 2024, PLOS ONE, CC-BY 4.0 (creativecommons.org/licenses/by/4.0/)

Incorporating “tactile emoticons” into social media communications can enhance communication, according to a study published June 12, 2024 in the open-access journal PLOS ONE by Alkistis Saramandi and Yee Ki Au from University College London, United Kingdom, and colleagues.

Digital communications rely exclusively on visual and auditory cues (text, emoticons, videos, and music) to convey tone and emotion. Currently lacking from these platforms is touch, which can convey feelings of love and support, impact emotions, and influence behaviors.

Technology companies are developing devices to incorporate touch into digital interactions, such as interactive kiss and hug transmission devices. These social touch devices can elicit feelings of closeness and positive emotions, but the effect of touch in social media communication is not well studied.

In this study, researchers incorporated tactile emoticons into social media interactions. Using a mock social media platform, participants were given posts to send that expressed either positive or negative emotions. They then received feedback via visual emoticons (e.g., a heart or thumbs up), tactile emoticons (a stroke on the forearm by either another person or a robotic device), or both.

Participants felt greater feelings of support and approval when they received the tactile emoticons compared to the visual-only feedback. This suggests that social touch, even by a stranger, can convey meaning without any other accompanying communication. Feedback consisting of both visual and tactile emoticons was preferred over either type of emoticon alone.

The researchers noted that touch could offer additional context to visual emoticons, which can be misinterpreted. They also found that the type of touch matters: touch delivered at a speed that activates C-tactile afferents, the nerve fibers associated with the positive feelings of touching or embracing, was experienced more favorably than other types of touch.
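For illustration, timing a stroke to stay in the C-tactile-optimal range (roughly 1 to 10 cm/s in the published literature, often cited near 3 cm/s) could look like the sketch below. The actuator spacing, array size, and driver call are assumptions, not the S-CAT's actual firmware.

```python
# Illustrative timing for a forearm stroke in the C-tactile-optimal range.
import time

ACTUATOR_SPACING_CM = 2.0  # assumed distance between adjacent actuators
N_ACTUATORS = 5            # assumed size of the forearm array

def stroke(speed_cm_s: float = 3.0) -> None:
    """Fire actuators in sequence so the tactile wave travels at speed_cm_s."""
    if not 1.0 <= speed_cm_s <= 10.0:
        raise ValueError("speed outside the C-tactile-optimal range")
    dwell = ACTUATOR_SPACING_CM / speed_cm_s  # seconds between neighbours
    for i in range(N_ACTUATORS):
        print(f"inflate actuator {i}")  # placeholder for a pneumatic driver call
        time.sleep(dwell)
```

At the default 3 cm/s, adjacent actuators fire about 0.67 seconds apart, so a five-actuator stroke spans roughly 2.7 seconds from first to last actuator.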

According to the authors, this is the first study to explore the role of touch in communicating emotions via social media. They hope that the results can inform the development of devices to deliver touch during digital communications.

The authors add, “Touch has long been essential to human bonding and survival, yet in our digitized world we are more isolated and touch-deprived than ever. What if we could use ‘digitalized touch’ to bring us closer in communicating emotions in today’s world?”

More information:
Alkistis Saramandi et al, Tactile emoticons: Conveying social emotions and intentions with manual and robotic tactile feedback during social media communications, PLOS ONE (2024). DOI: 10.1371/journal.pone.0304417

Citation:
Incorporating ‘touch’ into social media interactions can increase feelings of support and approval, study suggests (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-incorporating-social-media-interactions.html


Correcting biases in image generator models



Editing a model based on a source and destination prompt. The edit generalizes to related prompts (green), leaving unrelated ones unaffected (gray). Credit: Hadas Orgad et al

Image generator models—systems that produce new images based on textual descriptions—have become a common and well-known phenomenon in the past year. Their continuous improvement, largely relying on developments in the field of artificial intelligence, makes them an important resource in various fields.

To achieve good results, these models are trained on vast amounts of image-text pairs—for example, matching the text “picture of a dog” to a picture of a dog, repeated millions of times. Through this training, the model learns to generate original images of dogs.

However, as noted by Hadas Orgad, a doctoral student at the Henry and Marilyn Taub Faculty of Computer Science, and Bahjat Kawar, a graduate of the same faculty, “since these models are trained on a lot of data from the real world, they acquire and internalize assumptions about the world during the training process.

“Some of these assumptions are useful, for example, ‘the sky is blue,’ and they allow us to obtain beautiful images even with short and simple descriptions. On the other hand, the model also encodes incorrect or irrelevant assumptions about the world, as well as societal biases. For example, if we ask Stable Diffusion (a very popular image generator) for a picture of a CEO, we will only get pictures of women in 4% of cases.”

Another problem these models face is that the world around us keeps changing, and the models cannot adapt to those changes after the training process.

As Dana Arad, also a doctoral student at the Taub Faculty of Computer Science, explains, “during their training process, models also learn a lot of factual knowledge about the world. For example, models learn the identities of heads of state, presidents, and even actors who portrayed popular characters in TV series.

“Such models are no longer updated after their training process, so if we ask a model today to generate a picture of the President of the United States, we might still reasonably receive a picture of Donald Trump, who of course has not been the president in recent years. We wanted to develop an efficient way to update the information without relying on expensive actions.”

The “traditional” solution to these problems is constant data correction by the user, retraining, or fine-tuning. However, these fixes are costly in money, workload, result quality, and environmental impact (due to the longer operation of computer servers). Moreover, implementing these methods does not guarantee control over unwanted assumptions, or over new assumptions that may arise. “Therefore,” they explain, “we would like a precise method to control the assumptions that the model encodes.”

The methods developed by the doctoral students under the guidance of Dr. Yonatan Belinkov address this need. The first method, developed by Orgad and Kawar and called TIME (Text-to-Image Model Editing), allows for the quick and efficient correction of biases and assumptions.

This is because the correction requires no fine-tuning, no retraining, and no change to the language model or its text interpretation tools; it re-edits only around 1.95% of the model’s parameters. Moreover, the editing process takes less than a second.

In ongoing research based on TIME, called UCE and developed in collaboration with Northeastern University and MIT, they proposed a way to control a variety of undesirable behaviors of the model, such as copyright infringement or social biases, by removing unwanted associations, such as offensive content or the artistic styles of particular artists.

Another method, developed subsequently by Arad and Orgad, is called ReFACT. It offers a different algorithm for parameter editing and achieves more precise results.

ReFACT edits an even smaller percentage of the model’s parameters—only 0.25%—and manages to perform a wider variety of edits, even in cases where previous methods failed. It does so while maintaining the quality of the images and the facts and assumptions of the model that we want to preserve.

The methods take input from the user about a fact or assumption to edit. For implicit assumptions, the method receives a “source” prompt on which the model bases an implicit assumption (e.g., “roses,” which the model renders as red by default) and a “target” that describes the same concept with the desired features (e.g., “blue roses”).

To use the method for role editing, the method receives an editing request (e.g., “President of the United States”) along with a “source” and a “target” (“Donald Trump” and “Joe Biden,” respectively). The researchers collected about 200 facts and assumptions on which they tested the editing methods, showing that these are efficient methods for updating information and correcting biases.
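The papers report closed-form updates to specific projection matrices (TIME targets the cross-attention key and value projections). As a simplified sketch of the core least-squares step, under the assumptions of a single projection matrix, aligned source/destination token embeddings, and an illustrative regularization strength:

```python
# Simplified numpy sketch of a closed-form projection edit: W is nudged so
# source-prompt embeddings ("roses") produce what the original weights
# produce for the destination prompt ("blue roses"), with a regularizer
# keeping the edited matrix close to the original.
import numpy as np

def edit_projection(W: np.ndarray, C_src: np.ndarray, C_dst: np.ndarray,
                    lam: float = 0.1) -> np.ndarray:
    """Return W' = argmin ||W' C_src - W C_dst||^2 + lam * ||W' - W||^2.

    W:     (d_out, d_in) projection matrix to edit
    C_src: (d_in, n) token embeddings of the source prompt
    C_dst: (d_in, n) aligned token embeddings of the destination prompt
    """
    V_dst = W @ C_dst                               # desired outputs for source tokens
    A = V_dst @ C_src.T + lam * W
    B = C_src @ C_src.T + lam * np.eye(W.shape[1])  # symmetric positive definite
    return np.linalg.solve(B, A.T).T                # W' = A @ inv(B)
```

Because only these small matrices change, edits of this kind touch a tiny fraction of the model's parameters and run in well under a second, consistent with the figures quoted above.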

TIME was presented in October 2023 at ICCV, a conference in the field of computer vision and machine learning. UCE was recently presented at the WACV conference.

ReFACT was presented in Mexico at the NAACL conference, a conference in natural language processing research.

More information:
Editing Implicit Assumptions in Text-to-Image Diffusion Models

ReFACT: Updating Text-to-Image Models by Editing the Text Encoder

Citation:
Correcting biases in image generator models (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-biases-image-generator.html


Model combines physical parameters and machine learning to predict storm tides



The study combined physical and numerical models, working with data in different formats via a multimodal architecture. Credit: Tânia Rego/Agência Brasil

Predicting extreme events is essential to the preparation and protection of vulnerable regions, especially at a time of climate change. The city of Santos on the coast of São Paulo state (Brazil) is Latin America’s largest port and has been the focus of significant case studies, not least because of the storm surges that threaten its infrastructure and the local ecosystems.

A study that focused on a critical part of Santos used advanced machine learning tools to improve existing extreme-event prediction systems; an article reporting the results has been published in Proceedings of the AAAI Conference on Artificial Intelligence.

It mobilized a large number of researchers and was coordinated by Anna Helena Reali Costa, full professor at the University of São Paulo’s Engineering School (POLI-USP). The first author is Marcel Barros, a researcher in POLI-USP’s Department of Computer Engineering and Digital Systems.

The models used to predict sea surface heights, high tides, wave heights and so on are based on differential equations comprising temporal and spatial information such as the astronomical tide (determined by the relative positions of the sun, moon and Earth), wind regime, current velocity and salinity, among many others.

These models are successful in several areas, but they are complex and depend on a number of simplifications and hypotheses. Moreover, new measurements and other data sources cannot always be integrated into them to make forecasts more reliable.

Although modelers are increasingly using machine learning methods capable of identifying patterns in data and extrapolating to new situations, a great many examples are required to train the algorithms that perform complex tasks such as those involved in weather forecasting and storm tide prediction.

“Our study combined the two worlds to develop a model based on machine learning that uses physical models as a starting point but refines them by adding measured data. This research field is known as physics-informed machine learning, or PIML,” Barros explained.
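The article does not spell out the architecture, but one common PIML pattern is to keep the numerical model's forecast and train a network only on a sensor-informed correction to it. A minimal PyTorch sketch, with layer sizes chosen arbitrarily and no claim to match the authors' design:

```python
# Generic physics-informed hybrid: the physics forecast is kept as-is and a
# small network learns only the residual correction from sensor features.
import torch
import torch.nn as nn

class ResidualCorrector(nn.Module):
    """Adds a learned, sensor-informed correction to a physics forecast."""

    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, physics_forecast: torch.Tensor,
                sensor_features: torch.Tensor) -> torch.Tensor:
        # prediction = physics model output + learned residual
        return physics_forecast + self.net(sensor_features)
```

The appeal of this design is that the network never has to relearn the physics; it only corrects where the simplified equations and measured reality disagree.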

Harmonizing these two sources of information is fundamental to developing more precise and accurate forecasts. However, the use of sensor data faces significant technical challenges, owing especially to its irregular nature and to problems such as missing data, temporal displacements, and variations in sampling frequency. A failed sensor can take days to be brought back online, yet mechanisms for predicting storm tides must be able to operate continuously despite the missing data.

“To address situations with highly irregular data, we developed an innovative technique to represent the passing of time in neural networks. This representation lets the model be told the position and size of the missing data windows, so that it considers them in its predictions of tide and wave heights,” Barros said.
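The paper's time representation is its own contribution, but as context, one standard way to tell a network about gaps is to feed, alongside each reading, a mask bit and the time elapsed since the last valid observation. A sketch of that encoding, illustrative only:

```python
# Encode an irregular series as [value_or_zero, observed_mask, gap_since_last],
# so the model can see the position and size of missing-data windows.
import numpy as np

def encode_irregular_series(values: np.ndarray, times: np.ndarray) -> np.ndarray:
    """values: readings with NaN marking gaps; times: sample timestamps.

    Returns an (n, 3) array of [value_or_zero, observed_mask, gap_since_last].
    """
    mask = ~np.isnan(values)
    gaps = np.zeros(len(times))
    last_t = times[0]
    for i, t in enumerate(times):
        gaps[i] = t - last_t          # how stale the last valid reading is
        if mask[i]:
            last_t = t                # reset the gap counter on a real reading
    return np.stack([np.nan_to_num(values), mask.astype(float), gaps], axis=1)
```

A window of missing readings then appears to the model as a run of zero mask bits with a steadily growing gap channel, which is exactly the position-and-size information Barros describes.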

The innovation permits better modeling of complex natural phenomena and can also be applied to other domains with irregular time series, such as health data, sensor networks in manufacturing, or financial indicators.

“Furthermore, our model combines different kinds of neural networks so as to integrate multimodal data, such as satellite images, tables and forecasts from numerical models, with possible future integration of other types of data, such as text and audio. This approach is an important step toward more robust and adaptable forecasting systems that can handle the complexity and variability of the data associated with extreme weather events,” Reali Costa said.

The model has three key virtues, she added: it combines physical and numerical models; it represents time in neural networks in a new way; and it works with data in different formats by means of a multimodal architecture.

“The study offers a methodology that can improve the accuracy of predictions of extreme events, such as storm tides in Santos. At the same time, it highlights the challenges and potential solutions for the integration of physical models and sensor data in complex contexts,” she said.

More information:
Marcel Barros et al, Early Detection of Extreme Storm Tide Events Using Multimodal Data Processing, Proceedings of the AAAI Conference on Artificial Intelligence (2024). DOI: 10.1609/aaai.v38i20.30194

Citation:
Model combines physical parameters and machine learning to predict storm tides (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-combines-physical-parameters-machine-storm.html
