Car dealers across US are crippled by a second cyberattack

Credit: Unsplash/CC0 Public Domain

Auto retailers across the U.S. suffered a second major disruption in as many days due to another cyberattack at CDK Global, the software provider on which thousands of dealers rely to run their stores.

CDK informed customers on Thursday of the incident that had occurred late the prior evening. The company shut down most of its systems again, saying in a recorded update that it doesn’t have an estimate for how long it will take to restore services.

“Our dealers’ systems will not be available at a minimum on Thursday,” the company said.

On what otherwise would have been a busy U.S. holiday for business, dealers reliant on CDK were unable to use its systems to complete transactions, access customer records, schedule appointments or handle car-repair orders. The company serves almost 15,000 dealerships, supporting front-office salespeople, back-office support staff and parts-and-service shops.

AutoNation Inc. led shares of publicly listed dealership groups lower Thursday, falling as much as 4.6% in intraday trading. Lithia Motors Inc., Group 1 Automotive Inc. and Sonic Automotive Inc. also slumped.

Greg Thornton, the general manager of a dealership group in Frederick, Maryland, said his stores’ CDK customer-relations software had been down since early Wednesday morning.

“I can only assume that CDK is working all hands on deck to resolve this,” said Thornton, whose group includes Audi and Volvo stores. “We’ve had no conversations with them in person or over the phone.”

Sam Pack’s Five Star Chevrolet outside Dallas sold four vehicles on Wednesday despite the initial outage, but has had to adapt, for example by handling some tasks on paper until service is restored, said Alan Brown, the store’s general manager. While sales staff are able to submit approvals to lenders, the outage has blocked other elements of a transaction, such as obtaining titles.

“We’re still doing business,” Brown said. “It’s just not our normal flow.”

CDK hasn’t yet provided a timeline for when its systems will be available again, he said.

The National Automobile Dealers Association said Wednesday it was actively seeking information from CDK to determine the nature and scope of the cyber-incident.

CDK was spun off by Automatic Data Processing Inc. in 2014, then agreed to be acquired in April 2022 by the investment company Brookfield Business Partners in an all-cash deal valued at $6.4 billion.

© 2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.

Citation:
Car dealers across US are crippled by a second cyberattack (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-car-dealers-crippled-cyberattack.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

New electronic skin mimics human touch with 3D architecture

(A) Bio-inspired design of the 3D architected electronic skin (3DAE-skin). (B) 3DAE-skin attached to the finger tip of a robot hand. (C-G) Optical and microscope images of the 3DAE-skin. Credit: Science (2024). DOI: 10.1126/science.adk5556

Human skin, shaped by nature, has powerful sensing capabilities that scientists have long sought to replicate. It remains challenging for today’s technologies, however, to reproduce the spatial arrangement of skin’s complex 3D microstructure.

A research team led by Professor Yihui Zhang from Tsinghua University has developed a three-dimensionally architected electronic skin that mimics human mechanosensation for fully-decoupled sensing of normal force, shear force and strain.

Their findings were published in Science.

Taught by nature

Inspired by human skin, they created a three-dimensionally architected electronic skin (3DAE-Skin) whose force- and strain-sensing components are arranged in a 3D layout mimicking that of Merkel cells and Ruffini endings, the slowly adapting mechanoreceptors of human skin.

The 3DAE-Skin is the first device of its kind and shows excellent decoupled sensing of normal force, shear force and strain.

Enchanted by artificial intelligence

With the assistance of artificial intelligence, they developed a tactile system for simultaneous modulus/curvature measurements of an object through touch. Demonstrations include rapid modulus measurements of fruits, bread, and cake with various shapes and degrees of freshness.

Credit: Tsinghua University

The resulting technology enables rapid measurement of both the friction coefficient and the modulus of objects with diverse shapes, with potential applications in freshness assessment, biomedical diagnosis, humanoid robots and prosthetic systems, among others.

Zhang’s study was done with colleagues from Tsinghua University’s Applied Mechanics Laboratory, Department of Engineering Mechanics and Laboratory of Flexible Electronics Technology.

More information:
Zhi Liu et al, A three-dimensionally architected electronic skin mimicking human mechanosensation, Science (2024). DOI: 10.1126/science.adk5556

Citation:
New electronic skin mimics human touch with 3D architecture (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-electronic-skin-mimics-human-3d.html


Using AI to decode dog vocalizations

An AI tool developed at the University of Michigan can tell playful barks from aggressive ones—as well as identifying the dog’s age, sex and breed. Credit: Marcin Szczepanski/Michigan Engineering.

Have you ever wished you could understand what your dog is trying to say to you? University of Michigan researchers are exploring the possibilities of AI, developing tools that can identify whether a dog’s bark conveys playfulness or aggression.

The same models can also glean other information from animal vocalizations, such as the animal’s age, breed and sex. Conducted in collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, the study finds that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.

The results were presented at the Joint International Conference on Computational Linguistics, Language Resources and Evaluation. The study is published on the arXiv preprint server.

“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, and director of U-M’s AI Laboratory.

“There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch.”

One of the prevailing obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are numerous resources and opportunities for recording human speech, collecting such data from animals is more difficult.

“Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author and U-M doctoral student in computer science and engineering. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”

Artem Abzaliev and his dog, Nova, in Nuremberg, Germany. The AI software he developed with Rada Mihalcea and Humberto Pérez-Espinosa can identify whether a dog’s bark is playful or aggressive as well as identifying breed, sex and age. Credit: Abzaliev

Because of this dearth of usable data, techniques for analyzing dog vocalizations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. The researchers overcame these challenges by repurposing an existing model that was originally designed to analyze human speech.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation. These models are trained to distinguish nuances in human speech, like tone, pitch and accent, and convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.

“These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”

The researchers used a dataset of dog vocalizations recorded from 74 dogs of varying breed, age and sex, in a variety of contexts. Humberto Pérez-Espinosa, a collaborator at INAOE, led the team that collected the dataset. Abzaliev then used the recordings to modify a machine-learning model—a type of computer algorithm that identifies patterns in large data sets. The team chose a speech representation model called Wav2Vec2, which was originally trained on human speech data.

With this model, the researchers were able to generate representations of the acoustic data collected from the dogs and interpret these representations. They found that Wav2Vec2 not only succeeded at four classification tasks; it also outperformed other models trained specifically on dog bark data, with accuracy figures up to 70%.
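The paper’s fine-tuning setup isn’t detailed here, but the transfer-learning recipe it describes, a frozen pretrained speech encoder feeding a small trainable classification head, can be sketched as follows. Everything in this sketch is a stand-in: the fixed random projection plays the role of the Wav2Vec2 encoder, and the “barks” are synthetic signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained encoder such as Wav2Vec2: a fixed random
# projection from raw waveform samples to an embedding vector. (The real
# model is far richer; this only illustrates the transfer-learning recipe.)
N_SAMPLES, N_WAV, N_EMB, N_CLASSES = 200, 400, 32, 2
W_frozen = rng.normal(size=(N_WAV, N_EMB))

def embed(wave):
    return np.tanh(wave @ W_frozen)  # frozen features, never updated

# Synthetic "barks": two classes with different dominant frequencies,
# standing in for playful vs. aggressive vocalizations.
t = np.arange(N_WAV)
labels = rng.integers(0, N_CLASSES, N_SAMPLES)
waves = np.array(
    [np.sin(2 * np.pi * (0.01 if y == 0 else 0.05) * t)
     + 0.1 * rng.normal(size=N_WAV) for y in labels]
)
X = embed(waves)

# Trainable head: multinomial logistic regression on the frozen embeddings.
W = np.zeros((N_EMB, N_CLASSES))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(N_CLASSES)[labels]
    W -= 0.1 * X.T @ (p - onehot) / N_SAMPLES  # gradient step on the head only

acc = (np.argmax(X @ W, axis=1) == labels).mean()
print(f"training accuracy of the head: {acc:.2f}")
```

Only the head is updated; the encoder’s weights stay fixed, which is what lets a model pretrained on abundant human speech be reused on scarce animal data.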

“This is the first time that techniques optimized for human speech have been built upon to help with the decoding of animal communication,” Mihalcea said. “Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations.”

In addition to establishing human speech models as a useful tool in analyzing animal communication—which could benefit biologists, animal behaviorists and more—this research has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to the emotional and physical needs of dogs, thereby enhancing their care and preventing potentially dangerous situations, the researchers said.

More information:
Artem Abzaliev et al, Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification, arXiv (2024). DOI: 10.48550/arxiv.2404.18739

Journal information:
arXiv


Citation:
Using AI to decode dog vocalizations (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-ai-decode-dog-vocalizations.html


Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems

The new protocol structure ensures global and semi-global finite time consensus for both leaderless and leader-following multi-agent systems and allows the calculation of an upper-bound for settling time for the closed-loop system. Credit: Chinese Association of Automation

Consensus problems, in which a group of agents such as unmanned vehicles, machines or robots must agree on certain variables solely through local communication among themselves, have attracted considerable attention as a fundamental issue in the cooperative control of multi-agent systems. Simply put, a multi-agent system comprises multiple decision-making agents that interact in a common environment to achieve common or conflicting goals depending on the situation.

Depending on whether agents track a predetermined leader, these problems can be classified into leaderless or leader-following consensus. Researchers have extensively studied both types of problems and developed consensus protocols. However, most current protocols only provide asymptotic consensus.

Some applications require exact consensus within a limited time, known as finite-time consensus. Achieving such a consensus improves control accuracy and stability. In practice, however, finite-time consensus demands considerable control effort, and physical limits on that effort, if neglected, can degrade controller performance.

Studies have explored finite-time control methods subject to such constraints, but most rely on homogeneity theory, under which ensuring consensus convergence is difficult and an exact settling time is hard to estimate.

Addressing these issues, a team of researchers, including Professor Zongyu Zuo, Mr. Jingchuan Tan, and Mr. Ruiqi Ke, all from Beihang University, China, and IEEE Fellow Professor Qing-Long Han from Swinburne University of Technology, Australia, developed a novel protocol structure for achieving global and semi-global finite-time consensus for both leaderless and leader-following multi-agent systems. Their study was published in the IEEE/CAA Journal of Automatica Sinica.

The team was motivated by a fascination with the potential of robotic systems and artificial intelligence to transform our daily lives and tackle complex societal challenges efficiently and sustainably. Prof. Zuo intuitively explains their work, “Imagine a group of dancers who need to perform a synchronized routine, without directly seeing each other, only following cues from those nearby. Our work is akin to creating a set of rules that helps these dancers synchronize perfectly in a short time, ensuring everyone performs beautifully together even if they have limitations in how quickly they can move.”

The protocols presented in the study use a hyperbolic tangent function, instead of the non-smooth saturation function used in traditional protocols. These protocols guarantee global and semi-global finite-time consensus for integrator and double integrator type systems, respectively. Moreover, they also allow explicit calculation of an upper limit for settling time and a user-prescribed bounded control level for closed-loop systems, making them highly practical and valuable for real-world applications.

Additionally, unlike traditional protocols, the hyperbolic tangent function avoids the need to determine input saturation for each agent, simplifying the design and stability analysis of the protocols. The researchers demonstrated the effectiveness of the new protocol structure through illustrative examples for single- and double-integrator multi-agent systems and by applying it to a practical system with multiple direct current motors.
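The exact protocols, gains and finite-time guarantees are in the paper; as a rough illustration of the idea only, a single-integrator consensus simulation with a hyperbolic tangent protocol can look like the sketch below. The graph, gain and update rule here are illustrative choices, not the authors’ design.

```python
import math

# Minimal single-integrator consensus sketch with a tanh-based protocol:
#   u_i = -k * tanh( sum over neighbors j of (x_i - x_j) )
# tanh keeps |u_i| <= k, so the control effort is bounded by design,
# without a separate saturation function per agent.

def simulate(x0, neighbors, k=1.0, dt=0.01, steps=4000):
    x = list(x0)
    for _ in range(steps):
        u = [-k * math.tanh(sum(x[i] - x[j] for j in neighbors[i]))
             for i in range(len(x))]
        x = [xi + dt * ui for xi, ui in zip(x, u)]  # Euler integration
    return x

# Four agents on a ring graph with spread-out initial states.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = simulate([4.0, -3.0, 1.0, -2.0], ring)
spread = max(final) - min(final)
print(f"final disagreement: {spread:.4f}")
```

Because tanh is odd and increasing, the disagreement shrinks monotonically toward consensus while every agent’s control input stays within the bound k.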

Highlighting the practical applications of this study, Prof. Zuo says, “These protocols have broad applications, such as autonomous drone fleets for agricultural or surveillance tasks, coordinated control of robotic arms, and synchronized traffic light systems.

“Ultimately, our research could improve the efficiency and reliability of autonomous systems. For example, better traffic management systems could reduce congestion and pollution, while more coordinated disaster response robots could save lives during crises.”

Overall, the innovative protocol structure marks a significant achievement in the field of consensus problems, leading to enhanced multi-agent autonomous systems.

More information:
Zongyu Zuo et al, Hyperbolic Tangent Function-Based Protocols for Global/Semi-Global Finite-Time Consensus of Multi-Agent Systems, IEEE/CAA Journal of Automatica Sinica (2024). DOI: 10.1109/JAS.2024.124485

Provided by
Chinese Association of Automation

Citation:
Study presents novel protocol structure for achieving finite-time consensus of multi-agent systems (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-protocol-finite-consensus-multi-agent.html


The YouTube algorithm isn’t radicalizing people, says bots study

Credit: Pixabay/CC0 Public Domain

About a quarter of Americans get their news on YouTube. With its billions of users and hours upon hours of content, YouTube is one of the largest online media platforms in the world.

In recent years, there has been a popular narrative in the media that videos from highly partisan, conspiracy theory-driven YouTube channels radicalize young Americans and that YouTube’s recommendation algorithm leads users down a path of increasingly radical content.

However, a new study from the Computational Social Science Lab (CSSLab) at the University of Pennsylvania finds that users’ own political interests and preferences play the primary role in what they choose to watch. In fact, if the recommendation features have any impact on users’ media diets, it is a moderating one.

“On average, relying exclusively on the recommender results in less partisan consumption,” says lead author Homa Hosseinmardi, associate research scientist at the CSSLab.

YouTube bots

To determine the true effect of YouTube’s recommendation algorithm on what users watch, the researchers created bots that either followed the recommendation engine or ignored it entirely. The bots were trained on the YouTube watch histories of 87,988 real-life users, collected from October 2021 to December 2022.

Hosseinmardi and co-authors Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, and Duncan J. Watts aimed to untangle the complex relationship between user preferences and the recommendation algorithm, a relationship that evolves with each video watched.

These bots were assigned individualized YouTube accounts so that their viewing history could be tracked, and the partisanship of what they watched was estimated using the metadata associated with each video.

During two experiments, the bots, each with its own YouTube account, went through a “learning phase”—they watched the same sequence of videos to ensure that they all presented the same preferences to YouTube’s algorithm.

Next, bots were placed into groups. Some continued to follow the watching history of the real-life user they were trained on; others were assigned to be experimental “counterfactual bots”: bots following specific rules designed to separate user behavior from algorithmic influence.

In experiment one, after the learning phases, the control bot continued to watch videos from the user’s history, while counterfactual bots deviated from users’ real-life behavior and only selected videos from the list of recommended videos without taking the user preferences into account.

Some counterfactual bots always selected the first (“up next”) video from the sidebar recommendations; others randomly selected one of the top 30 videos listed in the sidebar recommendations; and others randomly selected a video from the top 15 videos in the homepage recommendations.
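The three counterfactual selection rules described above can be sketched as simple policies over a ranked recommendation list. The function names, list sizes and video IDs here are hypothetical; in the real study the bots operated on live YouTube sidebars and homepages.

```python
import random

rng = random.Random(42)

def up_next(sidebar):
    # Always take the first ("up next") sidebar recommendation.
    return sidebar[0]

def random_sidebar(sidebar, top_k=30):
    # Pick uniformly at random from the top-30 sidebar recommendations.
    return rng.choice(sidebar[:top_k])

def random_homepage(homepage, top_k=15):
    # Pick uniformly at random from the top-15 homepage recommendations.
    return rng.choice(homepage[:top_k])

sidebar = [f"side_{i}" for i in range(50)]
homepage = [f"home_{i}" for i in range(40)]

print(up_next(sidebar))           # side_0
print(random_sidebar(sidebar))    # one of side_0 .. side_29
print(random_homepage(homepage))  # one of home_0 .. home_14
```

Because each policy ignores the user’s own history after the learning phase, any partisanship in what these bots watch is attributable to the recommender alone.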

The researchers found that the counterfactual bots, on average, consumed less partisan content than the corresponding real user—a result that is stronger for heavier consumers of partisan content.

“This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends,” Hosseinmardi says. “The study exhibits similar moderating effects on bots consuming far-left content, or when bots are subscribed to channels on the extreme side of the political partisan spectrum.”

‘Forgetting time’ of recommendation algorithms

In experiment two, researchers aimed to estimate the “forgetting time” of the YouTube recommender.

“Recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users long after they have lost interest in it themselves,” Hosseinmardi says.

During this experiment, researchers calculated the recommender’s forgetting time for a user with a long (120 video) history of far-right video consumption who changes their diet to moderate news for the next 60 videos.

While the control bots continued watching a far-right diet for the whole experiment, counterfactual bots simulated a user “switching” from one set of preferences (watching far-right videos) to another (watching moderate videos). As the counterfactual bots changed their media preferences, the researchers tracked the average partisanship of recommended videos in the sidebar and homepage.

“On average, the recommended videos on the sidebar shifted toward moderate content after about 30 videos,” Hosseinmardi says, “while homepage recommendations tended to adjust less rapidly, showing homepage recommendations cater more to one’s preferences and sidebar recommendations are more related to the nature of the video currently being watched.”
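One simple way to operationalize a “forgetting time” like the roughly 30 videos reported above is to find when a rolling mean of recommendation partisanship first drops below a moderate threshold. The score series, threshold and window below are all synthetic and illustrative, not the study’s data or method.

```python
def forgetting_time(scores, threshold=0.3, window=5):
    """Index at which a rolling mean of partisanship scores first
    drops below `threshold`; None if it never does."""
    for i in range(window, len(scores) + 1):
        if sum(scores[i - window:i]) / window < threshold:
            return i  # videos watched before recommendations moderated
    return None

# Synthetic sidebar scores: partisanship decays as the bot switches to a
# moderate diet (1.0 = far-right recommendations, 0.0 = fully moderate).
scores = [1.0 * (0.9 ** n) for n in range(60)]
print(forgetting_time(scores))
```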

“The YouTube recommendation algorithm has been accused of leading its users toward conspiratorial beliefs. While these accusations hold some merit, we must not overlook that users have a significant agency over their actions and may have viewed the same content, or worse, even without any recommendations,” Hosseinmardi says.

Moving forward, the researchers hope that others can adopt their method for studying AI-mediated platforms where user preferences and algorithms interact, in order to better understand the role that algorithmic recommendation engines play in our daily lives.

The findings are published in the journal Proceedings of the National Academy of Sciences.

More information:
Homa Hosseinmardi et al, Causally estimating the effect of YouTube’s recommender system using counterfactual bots, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2313377121

Citation:
The YouTube algorithm isn’t radicalizing people, says bots study (2024, February 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-02-youtube-algorithm-isnt-radicalizing-people.html
