Censoring creativity: The limits of ChatGPT for scriptwriting
This diagram shows the process by which the researchers audited ChatGPT, using the first episode of Game of Thrones as an example. Credit: Yaaseen Mahomed, Charlie M. Crawford, Sanjana Gautam, Sorelle A. Friedler, Danaë Metaxa

Last year, the Writers Guild of America (WGA) labor union, which represents film and TV writers, went on strike for nearly five months, in part to regulate AI’s role in scriptwriting. “Alexa will not replace us,” read one picket sign.

Now, researchers at Penn Engineering, Haverford College, and Penn State have presented a paper at the 2024 Association for Computing Machinery Conference on Fairness, Accountability and Transparency (ACM FAccT) that identifies a previously unreported drawback to writing scripts using OpenAI’s ChatGPT: content moderation so overzealous that even some PG-rated scripts are censored, potentially limiting artistic expression.

The research is published in the proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.

The guidelines established by the agreement between the WGA and the Alliance of Motion Picture and Television Producers (AMPTP) that ended the strike permitted certain uses of AI in scriptwriting. While both the WGA and AMPTP agreed that AI cannot be credited as a writer, they allowed the use of AI as a tool in the creative process.

The new study raises questions about the efficacy of this approach, showing that automated content moderation restricts ChatGPT from producing content that has already been permitted on television. ChatGPT’s automated content moderation filters for topics including violence, sexuality and hate speech to prevent the generation of inappropriate or dangerous content.

In the study, which examined both real and ChatGPT-generated scripts for IMDb’s 100 most-watched television shows, including Game of Thrones, Stranger Things and 13 Reasons Why, ChatGPT flagged nearly 20% of scripts that ChatGPT itself generated for content violations, and nearly 70% of actual scripts from the TV shows on the list, including half of tested PG-rated shows.

“If AI is used to generate cultural content, such as TV scripts, what stories won’t be told?” write the paper’s co-senior authors, Danaë Metaxa, Raj and Neera Singh Assistant Professor in Computer and Information Science (CIS) at Penn Engineering, and Sorelle Friedler, Shibulal Family Computer Science Professor at Haverford College.

“We tested real scripts,” says Friedler, “and 69% of them wouldn’t make it through the content filters, including even some of the PG-rated ones. That really struck me as indicative of the system being a little overeager to filter out content.”

Researchers found that even shows rated TV-PG were flagged by ChatGPT for content violations. Credit: University of Pennsylvania

Prompted by the writers’ strike, the project began with Friedler and Metaxa wondering if a large language model (LLM) like ChatGPT could actually produce a high-quality script. “We started trying to produce scripts with LLMs,” recalls Metaxa, “and we found that before we could even get to the question of whether the script is high quality, in many cases we were not able to get the LLM to generate a script at all.”

In one instance, given a prompt drawn from a summary of an episode of Game of Thrones, ChatGPT declined to produce the script and responded with a red warning, “This content may violate our usage policies.”

To study ChatGPT’s content moderation system, the researchers employed a technique known as an “algorithm audit,” which draws conclusions about software whose internal workings remain proprietary by analyzing the software’s outputs.

The team, which also included first author Yaaseen Mahomed, a recent master’s graduate in CIS at Penn Engineering, Charlie M. Crawford, an undergraduate at Haverford, and Sanjana Gautam, a Ph.D. student in Informatics at Penn State, repeatedly queried ChatGPT, asking it to write scripts based on summaries of TV show episodes pulled from the Internet Movie Database (IMDb) and Wikipedia.

For each script request, the team probed ChatGPT’s “content moderation endpoint,” a tool accessible to programmers that returns a list of 11 categories of prohibited content (including “hate,” “sexual” and “self-harm”) and indicates which categories, if any, were triggered by the prompt, along with a score between 0 and 1 for each category reflecting the model’s confidence that a violation occurred.

In effect, this approach allowed the team to determine why certain script-writing requests were censored, and to deduce the sensitivity of ChatGPT’s content moderation settings to particular topics, genres and age ratings.
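For technically inclined readers, a single audit query of this kind can be reproduced with a short script. The following is a minimal sketch, assuming the current openai Python SDK; the episode summary is an invented placeholder, not the study’s code or data:

# Minimal sketch of one audit query against OpenAI's moderation endpoint.
# The episode summary is an invented placeholder, not from the study's dataset.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

summary = "A knight duels his rival at dawn outside the castle walls."
prompt = f"Write a TV script scene based on this episode summary: {summary}"

response = client.moderations.create(input=prompt)
result = response.results[0]

print("Flagged:", result.flagged)
# Each category carries a boolean flag and a 0-1 confidence score.
categories = result.categories.model_dump()
scores = result.category_scores.model_dump()
for name, flagged in categories.items():
    print(f"{name}: flagged={flagged}, score={scores[name]:.3f}")

Repeating such queries over many episode summaries, and recording the returned categories and scores, is what lets an audit infer the filter’s behavior without access to its internals.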

As the paper’s authors acknowledge, content moderation is an essential part of LLMs, since removing inappropriate content from the models’ training data is extremely difficult. “If you don’t bake in some form of content moderation,” says Friedler, “then these models will spew violent and racist language at you.”

Still, as the researchers found, overzealous content moderation can easily tip into censorship and limit artistic expression. Aggregating over 250,000 outputs from the content moderation endpoint allowed the researchers to observe patterns in ChatGPT’s choice to permit (or not permit) itself to write certain scripts.
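The paper’s aggregate analysis is more involved, but the basic idea of summarizing endpoint outputs can be sketched in a few lines of pandas; the CSV layout and column names below are hypothetical:

import pandas as pd

# Hypothetical table: one row per audited script request, with the show's
# age rating and one boolean column per moderation category.
df = pd.read_csv("moderation_outputs.csv")

categories = ["violence", "sexual", "hate", "self_harm"]

print(df[categories].mean())                    # overall flag rate per category
print(df.groupby("rating")[categories].mean())  # flag rates broken out by age rating

Averaging boolean flag columns in this way yields the kind of per-category and per-rating flag rates the study reports.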

Certain categories were flagged for content violations more than others; real scripts had the highest rates of content violations. Credit: University of Pennsylvania

Among the researchers’ most notable findings is that different categories of potentially harmful content are flagged at different rates. Scripts were flagged most frequently for violent content, which drove many of the other findings, such as the high likelihood of flagging for crime and horror shows. Real scripts had high relative scores for sexual content, while ChatGPT-generated scripts were less likely to contain content deemed inappropriately sexual in the first place.

In many cases, content seen as appropriate for TV viewers, and watched by millions of fans, was still identified as a content violation by OpenAI.

TV scripts that mention self-harm, for instance, could be dangerous, or a form of artistic expression. “We need to be talking about topics like self-harm,” says Metaxa, “but with a level of care and nuance, and it’s just not in the interest of a company producing this kind of tool to put in the enormous amount of effort that it would require to walk that line carefully.”

One aspect of ChatGPT that the researchers hope to explore further is the extent to which the software’s content moderation settings filter out content related to marginalized identities. As Friedler puts it, “This type of filtering may filter out some voices and some representations of human life more than others.”

Indeed, the researchers found that ChatGPT was more likely to flag scripts describing female nudity as improperly sexual than scripts describing male nudity, and that ChatGPT was more likely to rate scripts that included descriptions of disabilities and mental illness as violent, although the researchers say that both trends need to be further investigated.

“Ironically,” says Metaxa, “the groups that are likely to be hurt by hate speech that might spew from an LLM without guardrails are the same groups that are going to be hurt by over-moderation that restricts an LLM from speaking about certain types of marginalized identities.”

In the context of the recent strike, the researchers affirm the necessity of both content moderation and artistic expression, neither of which they believe should be left entirely in the hands of autonomous systems. “Content moderation is far from a solved problem and undeniably important,” the researchers conclude. “But the solution to these issues must not be censorship.”

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science, Haverford College and The Pennsylvania State University.

More information:
Yaaseen Mahomed et al, Auditing GPT’s Content Moderation Guardrails: Can ChatGPT Write Your Favorite TV Show?, The 2024 ACM Conference on Fairness, Accountability, and Transparency (2024). DOI: 10.1145/3630106.3658932

Citation:
Censoring creativity: The limits of ChatGPT for scriptwriting (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-censoring-creativity-limits-chatgpt-scriptwriting.html


A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life

General scheme of ADAM elements from back and front view. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Worldwide, humans are living longer than ever before. According to data from the United Nations, approximately 13.5% of the world’s people were at least 60 years old in 2020, and by some estimates, that figure could increase to nearly 22% by 2050.

Advanced age can bring cognitive and/or physical difficulties, and with more and more elderly individuals potentially needing assistance to manage such challenges, advances in technology may provide the necessary help.

One of the newest innovations comes from a collaboration between researchers at Spain’s Universidad Carlos III and the manufacturer Robotnik. The team has developed the Autonomous Domestic Ambidextrous Manipulator (ADAM), an elderly care robot that can assist people with basic daily functions. The team reports on its work in Frontiers in Neurorobotics.

ADAM, an indoor mobile robot that stands upright, features a vision system and two arms with grippers. It can adapt to homes of different sizes for safe and optimal performance. It respects users’ personal space while helping with domestic tasks and learning from its experiences via an imitation learning method.

On a practical level, ADAM can pass through doors and perform everyday tasks such as sweeping a floor, moving objects and furniture as needed, setting a table, pouring water, preparing a simple meal, and bringing items to a user upon request.

Credit: Gonzalo Espinoza / Universidad Carlos III Robotics Lab

In their review of existing developments in this arena, the researchers describe several recently developed robots adapted to assist elderly individuals with both cognitive tasks (such as memory training and games to help alleviate dementia symptoms) and physical tasks (such as detecting falls and issuing notifications, helping users manage home automation systems, and retrieving items from the floor or storing them in areas of the home the user cannot reach).

Against this backdrop, the team behind this new work aimed to design a robot with unique features to assist users with physical tasks in their own homes.

Next-level personal care through modular design and a learning platform

Several features set ADAM apart from existing personal care robots. The first is its modular design, which includes a base, cameras, arms and hands providing multiple sensory inputs. Each of these units can work independently or cooperatively at a high or low level. Importantly, this means that the robot can support research while meeting users’ personal care needs.

In addition, ADAM’s arms themselves are collaborative, allowing for user operation, and can move according to the parameters of the immediate environment. Moreover, as a basic safety feature of the robot’s design, it continuously considers the people present in the environment in order to avoid collisions while providing personal care.

Visual description of the ADAM service robotic platform and its four main capabilities for the development of elderly care tasks: perception of the environment, navigation and environmental comprehension, social navigation and manipulation learning. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Technical aspects

ADAM stands 160 cm tall, about the height of a petite human adult. Its arms, whose maximum load capacity is 3 kg, extend to a width of 50 cm. The researchers point out that they designed the robot to “simulate the structure of a human torso and arms. This is because a human-like structure allows it to work more comfortably in domestic environments because the rooms, doors, and furniture are adapted to humans.”

Batteries in ADAM’s base power its movements, cameras, and 3D LiDAR sensors. With all systems running, the robot’s minimum battery life is just under four hours, and battery charging takes a little over two hours. It can rotate in place and move forward and backward, but not laterally.

ADAM includes two internally connected computers, one for the base and the other for the arms, and a WiFi module for external communication. An RGBD camera and 2D LiDAR help to control basic forward movement, complemented by additional RGBD and LiDAR sensors positioned higher in the unit that expand its perception angle and range.

Visualization of the ADAM model in simulation, where the reference systems of the base and arms can be seen. The reference frame transformations between them are schematically represented. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

The additional RGBD sensor is a Realsense D435 depth camera that includes an RGB module and infrared stereo vision, while the additional LiDAR sensor provides 3D spatial details that work with a geometric mapping algorithm to map the entirety of objects in the environment.

The approximate range of motion of ADAM’s arms is 360°, and its hands consist of a parallel gripper system (the “Duck Gripper”). Within this system is an independent power supply and a Raspberry Pi Zero 2 W board that communicates via WiFi to a corresponding robot operating system (ROS) node. Force-sensing resistors (FSRs) on each gripper jaw help the hands grasp and pick up objects with appropriate amounts of force.
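ADAM’s control software is not reproduced in the paper, but to illustrate the kind of force-limited grasping loop that FSR feedback enables, here is a purely hypothetical sketch under ROS 1 (rospy); the topic names, target force and read_fsr() helper are all assumptions, not ADAM’s actual firmware:

# Hypothetical force-limited grip loop, illustrating FSR feedback over ROS.
# Not ADAM's firmware; topic names and values are invented for this sketch.
import rospy
from std_msgs.msg import Float32

TARGET_FORCE_N = 2.0  # assumed safe grip force for a small household object

def read_fsr():
    # Hypothetical hardware read; a real gripper would sample the FSRs
    # through an ADC and convert the reading to newtons.
    return 0.0  # placeholder value for illustration

def grip_loop():
    rospy.init_node("duck_gripper")
    force_pub = rospy.Publisher("/gripper/force", Float32, queue_size=1)
    close_pub = rospy.Publisher("/gripper/close_step", Float32, queue_size=1)
    rate = rospy.Rate(50)  # 50 Hz control loop
    while not rospy.is_shutdown():
        force = read_fsr()
        force_pub.publish(force)
        if force < TARGET_FORCE_N:
            close_pub.publish(0.1)  # tighten slightly until target force is reached
        rate.sleep()

if __name__ == "__main__":
    grip_loop()

Closing the jaws in small steps until the measured force reaches a target is one simple way a gripper can apply appropriate amounts of force without crushing delicate objects.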

Acing an early test involving collaboration

The researchers report that they have successfully tested ADAM as part of the Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (HIMTAE) project. Collaborating with researchers from Spain’s University of Cartagena and Sweden’s University of Örebro, they presented ADAM as an integral part of a team including multiple robots and home automation systems.

Information captured by the perception system. The main sources of information are the RGB image and the corresponding depth values from the RGBD sensor and the 3D spatial information from the LiDAR sensor, which covers a full room. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

Within the test, another robot (“Robwell”) had established an “empathetic relationship” with users, who wore bracelets to monitor their mental and physical states and communicate them to Robwell.

Robwell, in turn, would remind the users to drink water when needed and communicate with both the home automation system and ADAM regarding specific user needs. ADAM’s role was to perform tasks within the kitchen, preparing and delivering food or water to Robwell, which would then provide it to the users.

Users who participated in the test reported an average satisfaction of 93% with the outcome. The researchers note that employing two robots was effective; Robwell could monitor and engage with users while ADAM worked in the kitchen. Users were also able to enter the kitchen and interact with ADAM while it performed tasks, and ADAM could likewise interact with users while they performed tasks.

  • Duck Gripper final design with an exploded view of the gripper and its main components. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608
  • Duck Gripper performance test movements, from left to right and top to bottom: the gripper grabs the object on the workstation; the gripper displaces away from the robot; the end effector rotates 90° clockwise; the gripper displaces toward the robot; the end effector rotates 90° counterclockwise; the gripper opens to release the object. Credit: Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

What’s needed next?

As the HIMTAE test results were obtained within a controlled laboratory environment, the team cautions that future tests must take place in authentic domestic environments to determine user satisfaction with ADAM’s performance.

Looking ahead, the researchers observe, “The perception system is fixed, so in certain situations, ADAM will not be able to detect specific parts of the environment. The bimanipulation capabilities of ADAM are not fully developed, and the arms configuration is not optimized.” In addition to focusing on improvements in these areas, they write that “new task and motion planning strategies will be implemented to deal with more complex home tasks, which will make ADAM a much more complete robot companion for elderly care.”

More information:
Alicia Mora et al, ADAM: a robotic companion for enhanced quality of life in aging populations, Frontiers in Neurorobotics (2024). DOI: 10.3389/fnbot.2024.1337608

© 2024 Science X Network

Citation:
A novel elderly care robot could soon provide personal assistance, enhancing seniors’ quality of life (2024, February 19)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-elderly-robot-personal-seniors-quality.html


Car dealers across US are crippled by a second cyberattack

Credit: Unsplash/CC0 Public Domain

Auto retailers across the U.S. suffered a second major disruption in as many days due to another cyberattack at CDK Global, the software provider on which thousands of dealers rely to run their stores.

CDK informed customers on Thursday of the incident that had occurred late the prior evening. The company shut down most of its systems again, saying in a recorded update that it doesn’t have an estimate for how long it will take to restore services.

“Our dealers’ systems will not be available at a minimum on Thursday,” the company said.

On what otherwise would have been a busy U.S. holiday for business, dealers reliant on CDK were unable to use its systems to complete transactions, access customer records, schedule appointments or handle car-repair orders. The company serves almost 15,000 dealerships, supporting front-office salespeople, back-office support staff and parts-and-service shops.

AutoNation Inc. led shares of publicly listed dealership groups lower Thursday, falling as much as 4.6% in intraday trading. Lithia Motors Inc., Group 1 Automotive Inc. and Sonic Automotive Inc. also slumped.

Greg Thornton, the general manager of a dealership group in Frederick, Maryland, said his stores’ CDK customer-relations software had been down since early Wednesday morning.

“I can only assume that CDK is working all hands on deck to resolve this,” said Thornton, whose group includes Audi and Volvo stores. “We’ve had no conversations with them in person or over the phone.”

Sam Pack’s Five Star Chevrolet outside Dallas sold four vehicles on Wednesday despite the initial outage, but has had to adapt, such as by handling some tasks on paper until service is restored, said Alan Brown, the store’s general manager. While sales staff are able to submit approvals to lenders, the outage has blocked other elements of a transaction, such as obtaining titles.

“We’re still doing business,” Brown said. “It’s just not our normal flow.”

CDK hasn’t yet provided a timeline for when its systems will be available again, he said.

The National Automobile Dealers Association said Wednesday it was actively seeking information from CDK to determine the nature and scope of the cyber-incident.

CDK was spun off by Automatic Data Processing Inc. in 2014, then agreed to be acquired in April 2022 by the investment company Brookfield Business Partners in an all-cash deal valued at $6.4 billion.

© 2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.

Citation:
Car dealers across US are crippled by a second cyberattack (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-car-dealers-crippled-cyberattack.html


New electronic skin mimics human touch with 3D architecture

3D architected electronic skin mimicking human mechanosensation
(A) Bio-inspired design of the 3D architected electronic skin (3DAE-skin). (B) 3DAE-skin attached to the finger tip of a robot hand. (C-G) Optical and microscope images of the 3DAE-skin. Credit: Science (2024). DOI: 10.1126/science.adk5556

Human skin, created by nature, shows powerful sensing capabilities that scientists have pursued for a very long time. However, replicating the spatial arrangement of its complex 3D microstructure remains challenging for today’s technologies.

A research team led by Professor Yihui Zhang from Tsinghua University has developed a three-dimensionally architected electronic skin (3DAE-skin) that mimics human mechanosensation for fully decoupled sensing of normal force, shear force and strain.

Their findings were published in Science.

Taught by nature

Inspired by human skin, they created a three-dimensionally architected electronic skin with force and strain sensing components arranged in a 3D layout that mimics that of Merkel cells and Ruffini endings in human skin.

The 3DAE-skin shows excellent decoupled sensing performance for normal force, shear force and strain, and is the first device of its kind with force and strain sensing components arranged in a 3D layout that mimics that of slowly adapting mechanoreceptors in human skin.
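The decoupling itself comes from the 3D arrangement of the sensing components, but the meaning of “decoupled sensing” can be illustrated conceptually: if each raw channel responds linearly to the three stimuli, a calibration matrix maps stimuli to readings, and inverting it recovers each quantity independently. The sketch below uses NumPy with entirely invented numbers and is not the paper’s method:

import numpy as np

# Invented calibration matrix: rows are raw channels, columns are the
# responses to normal force, shear force and strain respectively.
C = np.array([
    [1.00, 0.05, 0.02],  # channel 1 responds mostly to normal force
    [0.04, 0.95, 0.03],  # channel 2 responds mostly to shear force
    [0.01, 0.02, 0.90],  # channel 3 responds mostly to strain
])

readings = np.array([0.52, 0.31, 0.18])  # example raw sensor outputs

# Solving C @ stimuli = readings recovers the three quantities independently.
normal_force, shear_force, strain = np.linalg.solve(C, readings)
print(normal_force, shear_force, strain)

The closer the calibration matrix is to diagonal, the less the channels interfere with one another, which is what a well-decoupled sensor layout achieves physically.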

Enchanted by artificial intelligence

With the assistance of artificial intelligence, they developed a tactile system for simultaneous modulus/curvature measurements of an object through touch. Demonstrations include rapid modulus measurements of fruits, bread, and cake with various shapes and degrees of freshness.

3D architected electronic skin mimicking human mechanosensation. Credit: Tsinghua University

The resulting technology provides rapid measurement capabilities for the friction coefficient and modulus of objects with diverse shapes, with potential applications in freshness assessment, biomedical diagnosis, humanoid robots and prosthetic systems, among others.

Zhang’s study was done with colleagues from Tsinghua University’s Applied Mechanics Laboratory, Department of Engineering Mechanics and Laboratory of Flexible Electronics Technology.

More information:
Zhi Liu et al, A three-dimensionally architected electronic skin mimicking human mechanosensation, Science (2024). DOI: 10.1126/science.adk5556

Citation:
New electronic skin mimics human touch with 3D architecture (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-electronic-skin-mimics-human-3d.html


Using AI to decode dog vocalizations

An AI tool developed at the University of Michigan can tell playful barks from aggressive ones, as well as identify the dog’s age, sex and breed. Credit: Marcin Szczepanski/Michigan Engineering.

Have you ever wished you could understand what your dog is trying to say to you? University of Michigan researchers are exploring the possibilities of AI, developing tools that can identify whether a dog’s bark conveys playfulness or aggression.

The same models can also glean other information from animal vocalizations, such as the animal’s age, breed and sex. A collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, the study finds that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.

The results were presented at the Joint International Conference on Computational Linguistics, Language Resources and Evaluation. The study is published on the arXiv preprint server.

“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, and director of U-M’s AI Laboratory.

“There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch.”

One of the prevailing obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are numerous resources and opportunities for recording human speech, collecting such data from animals is more difficult.

“Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author and U-M doctoral student in computer science and engineering. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”

Artem Abzaliev and his dog, Nova, in Nuremberg, Germany. The AI software he developed with Rada Mihalcea and Humberto Pérez-Espinosa can identify whether a dog’s bark is playful or aggressive, as well as the dog’s breed, sex and age. Credit: Abzaliev

Because of this dearth of usable data, techniques for analyzing dog vocalizations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. The researchers overcame these challenges by repurposing an existing model that was originally designed to analyze human speech.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation. These models are trained to distinguish nuances in human speech, like tone, pitch and accent, and convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.

“These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”

The researchers used a dataset of dog vocalizations recorded from 74 dogs of varying breed, age and sex, in a variety of contexts. Humberto Pérez-Espinosa, a collaborator at INAOE, led the team that collected the dataset. Abzaliev then used the recordings to modify a machine-learning model, a type of computer algorithm that identifies patterns in large data sets. The team chose a speech representation model called Wav2Vec2, which was originally trained on human speech data.
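As a minimal sketch of what repurposing Wav2Vec2 for bark classification might look like, assuming the Hugging Face transformers library (the label set and audio below are placeholders, not the study’s pipeline):

import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["playful", "aggressive"]  # illustrative label set

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)  # the classification head starts randomly initialized and must be fine-tuned

waveform = torch.zeros(16000)  # placeholder: 1 second of bark audio at 16 kHz

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[int(logits.argmax(dim=-1))])

Starting from pretrained speech weights in this way, rather than training from scratch, is what lets a small dataset of 74 dogs go further than it otherwise would.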

With this model, the researchers were able to generate representations of the acoustic data collected from the dogs and interpret these representations. They found that Wav2Vec2 not only succeeded at four classification tasks, but also outperformed other models trained specifically on dog bark data, achieving accuracy figures of up to 70%.

“This is the first time that techniques optimized for human speech have been built upon to help with the decoding of animal communication,” Mihalcea said. “Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations.”

In addition to establishing human speech models as a useful tool in analyzing animal communication, which could benefit biologists, animal behaviorists and more, this research has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to the emotional and physical needs of dogs, thereby enhancing their care and preventing potentially dangerous situations, the researchers said.

More information:
Artem Abzaliev et al, Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification, arXiv (2024). DOI: 10.48550/arxiv.2404.18739

Journal information:
arXiv


Citation:
Using AI to decode dog vocalizations (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-ai-decode-dog-vocalizations.html
