
Meta’s AI can translate dozens of under-resourced languages

Architecture of the LASER3 teacher-student approach. Credit: Nature (2024). DOI: 10.1038/s41586-024-07335-x

The technology behind Meta’s artificial intelligence model, which can translate 200 different languages, is described in a paper published in Nature. The model expands the number of languages that can be translated via machine translation.

Neural machine translation models utilize artificial neural networks to translate languages. These models typically need large amounts of online data for training, data that may not be publicly, cheaply, or commonly available for some languages, termed "low-resource languages." Expanding the number of languages a model translates can also degrade the quality of its translations.

Marta Costa-jussà and the No Language Left Behind (NLLB) team have developed a cross-language approach, which allows neural machine translation models to learn how to translate low-resource languages using their pre-existing ability to translate high-resource languages.
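To picture the teacher-student idea, here is a minimal, hypothetical sketch: a frozen "teacher" encoder trained on high-resource languages defines a shared embedding space, and a "student" encoder for a low-resource language is trained on parallel sentences to map into that same space. The encoder architecture, sizes, and data below are illustrative stand-ins, not Meta's actual LASER3 code.

```python
import torch
import torch.nn as nn

EMB_DIM, VOCAB = 512, 8000  # illustrative sizes, not the real model's

class MeanEncoder(nn.Module):
    """Toy sentence encoder: mean of token embeddings followed by a projection."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB_DIM)
        self.proj = nn.Linear(EMB_DIM, EMB_DIM)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        return self.proj(self.emb(token_ids).mean(dim=1))

teacher = MeanEncoder()          # stands in for an encoder pretrained on high-resource languages
teacher.requires_grad_(False)    # frozen: it defines the shared embedding space
student = MeanEncoder()          # learns to embed the low-resource language

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy parallel batch: src = low-resource sentences, tgt = their high-resource translations.
src = torch.randint(0, VOCAB, (32, 20))
tgt = torch.randint(0, VOCAB, (32, 20))

for _ in range(3):                                 # a few illustrative training steps
    loss = loss_fn(student(src), teacher(tgt))     # pull student embeddings onto the teacher's
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```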

As a result, the researchers have developed an online multilingual translation tool, called NLLB-200, that includes 200 languages, contains three times as many low-resource languages as high-resource languages, and performs 44% better than pre-existing systems.

Because the researchers had access to only 1,000–2,000 samples for many low-resource languages, they used a language identification system to find more instances of those languages and increase the volume of training data for NLLB-200. The team also mined bilingual textual data from Internet archives, which helped improve the quality of the translations NLLB-200 provided.
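As a rough illustration of that data-mining step (not the NLLB team's actual pipeline), one could filter crawled text with an off-the-shelf language identifier such as fastText's public lid.176.bin model; the language code and confidence threshold below are arbitrary examples.

```python
import fasttext

# lid.176.bin is fastText's public language-identification model
# (https://fasttext.cc/docs/en/language-identification.html).
lid = fasttext.load_model("lid.176.bin")

def keep_if_language(lines, lang_code="oc", min_conf=0.9):
    """Keep lines the identifier assigns to `lang_code` with high confidence."""
    kept = []
    for line in lines:
        labels, probs = lid.predict(line.replace("\n", " "), k=1)
        if labels[0] == f"__label__{lang_code}" and probs[0] >= min_conf:
            kept.append(line)
    return kept

web_dump = ["..."]                                            # crawled sentences would go here
corpus = keep_if_language(web_dump, lang_code="oc")           # "oc" = Occitan, a lower-resource language
```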

The authors note that this tool could help people speaking rarely translated languages to access the Internet and other technologies. Additionally, they highlight education as a particularly significant application, as the model could help those speaking low-resource languages access more books and research articles. However, Costa-jussà and co-authors acknowledge that mistranslations may still occur.

More information:
Scaling neural machine translation to 200 languages, Nature (2024). DOI: 10.1038/s41586-024-07335-x

Citation:
Meta’s AI can translate dozens of under-resourced languages (2024, June 7)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-meta-ai-dozens-resourced-languages.html


Smart devices’ ambient light sensors pose imaging privacy risk

CSAIL uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen. Credit: Alex Shipps/MIT CSAIL

In George Orwell’s novel “1984,” Big Brother watches citizens through two-way, TV-like telescreens, surveilling them without any cameras. In a similar fashion, our current smart devices contain ambient light sensors, which open the door to a different threat: hackers.

These passive, seemingly innocuous smartphone components receive light from the environment and adjust the screen’s brightness accordingly, like when your phone automatically dims in a bright room. Unlike with cameras, though, apps are not required to ask permission to use these sensors.

In a surprising discovery, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen.

To demonstrate how hackers could use these sensors in tandem with monitors, the team proposed a computational imaging algorithm that recovers an image of the environment from the perspective of the display screen, using the subtle single-point changes in light intensity that the sensors register.

The paper was published in Science Advances earlier this January.

“This work turns your device’s ambient light sensor and screen into a camera! Ambient light sensors are tiny devices deployed in almost all portable devices and screens that surround us in our daily lives,” says Princeton University professor Felix Heide, who was not involved with the paper. “As such, the authors highlight a privacy threat that affects a comprehensive class of devices and has been overlooked so far.”

While phone cameras have previously been exposed as security threats for recording user activity, the MIT group found that ambient light sensors can capture images of users’ touch interactions without a camera. According to their new study, these sensors can eavesdrop on regular gestures, like scrolling, swiping, or sliding, and capture how users interact with their phones while watching videos. For example, apps with native access to your screen, including video players and web browsers, could spy on you to gather this permission-free data.

According to the researchers, the commonly held belief is that ambient light sensors don’t reveal meaningful private information to hackers, so programming apps to request access to them is unnecessary. “Many believe that these sensors should always be turned on,” says lead author Yang Liu, MIT Electrical Engineering & Computer Science Department (EECS) and CSAIL Ph.D. student.

“But much like the telescreen, ambient light sensors can passively capture what we’re doing without our permission, while apps are required to request access to our cameras. Our demonstrations show that when combined with a display screen, these sensors could pose some sort of imaging privacy threat by providing that information to hackers monitoring your smart devices.”

Collecting these images requires a dedicated inversion process where the ambient light sensor first collects low-bitrate variations in light intensity, partially blocked by the hand making contact with the screen. Next, the outputs are mapped into a two-dimensional space by forming an inverse problem with the knowledge of the screen content. An algorithm then reconstructs the picture from the screen’s perspective, which is iteratively optimized and denoised via deep learning to reveal a pixelated image of hand activity.
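A toy numerical sketch of that inverse problem, under simplifying assumptions (known screen content, a linear sensing model, and plain Tikhonov regularization instead of the paper's learned denoising), might look like this:

```python
import numpy as np

H = W = 16                       # illustrative reconstruction resolution
n_pixels, n_frames = H * W, 800  # one sensor reading per displayed frame

rng = np.random.default_rng(0)
patterns = rng.uniform(size=(n_frames, n_pixels))    # known screen content per frame

mask = np.ones(n_pixels)                              # ground-truth per-pixel transmission
mask.reshape(H, W)[4:12, 6:10] = 0.2                  # a "hand" blocking part of the screen

# Each single-point sensor reading is the dot product of screen pattern and mask, plus noise.
readings = patterns @ mask + 0.01 * rng.normal(size=n_frames)

# Tikhonov-regularized least squares: argmin ||A x - y||^2 + lam * ||x||^2
lam = 1e-1
A, y = patterns, readings
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)

print(np.round(x_hat.reshape(H, W), 1))               # pixelated image of the occluding hand
```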

The study introduces a novel combination of passive sensors and active monitors to reveal a previously unexplored imaging threat that could expose the environment in front of the screen to hackers processing the sensor data from another device. “This imaging privacy threat has never been demonstrated before,” says Liu, who worked on the paper alongside Frédo Durand, an MIT EECS professor, CSAIL member, and the paper’s senior author.

The team suggested two software mitigation measures for operating system providers: tightening up permissions and reducing the precision and speed of the sensors. First, they recommend restricting access to the ambient light sensor by allowing users to approve or deny those requests from apps. To further prevent any privacy threats, the team also proposed limiting the capabilities of the sensors.

By reducing the precision and speed of these components, the sensors would reveal less private information. From the hardware side, the ambient light sensor should not be directly facing the user on any smart device, they argued, but instead placed on the side where it won’t capture any significant touch interactions.
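As a hedged illustration of the software side of those mitigations, quantizing readings and rate-limiting queries could be wrapped around a raw sensor reader roughly like this (the class and parameters are hypothetical, not a real OS API):

```python
import time

class ThrottledAmbientLightSensor:
    """Illustrative wrapper that degrades precision and speed of sensor reads."""
    def __init__(self, read_raw, step_lux=50.0, min_interval_s=5.0):
        self._read_raw = read_raw           # platform-specific raw reader (assumed to exist)
        self._step = step_lux               # coarse quantization step
        self._min_interval = min_interval_s # minimum time between fresh readings
        self._last_time = -float("inf")
        self._last_value = None

    def read(self):
        now = time.monotonic()
        if now - self._last_time >= self._min_interval:
            raw = self._read_raw()
            self._last_value = round(raw / self._step) * self._step  # drop fine detail
            self._last_time = now
        return self._last_value             # apps get a stale value until the interval elapses

sensor = ThrottledAmbientLightSensor(read_raw=lambda: 312.7)
print(sensor.read())                        # -> 300.0
```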

Getting the picture

The inversion process was applied to three demonstrations using an Android tablet. In the first test, the researchers seated a mannequin in front of the device, while different hands made contact with the screen. A human hand pointed to the screen, and later, a cardboard cutout resembling an open-hand gesture touched the monitor, with the pixelated imprints gathered by the MIT team revealing the physical interactions with the screen.

A subsequent demo with human hands revealed that the way users slide, scroll, pinch, swipe, and rotate could be gradually captured by hackers through the same imaging method, although only at a speed of one frame every 3.3 minutes. With a faster ambient light sensor, malicious actors could potentially eavesdrop on user interactions with their devices in real time.

In a third demo, the group found that users are also at risk when watching videos like films and short clips. A human hand hovered in front of the sensor while scenes from Tom and Jerry played on screen, with a white board behind the user reflecting light to the device. The ambient light sensor captured the subtle intensity changes for each video frame with the resulting images exposing touch gestures.

While the vulnerabilities in ambient light sensors pose a threat, such a hack is still limited. The attack is slow: the current image retrieval rate is one frame every 3.3 minutes, far longer than most user interactions last. Additionally, the recovered pictures are still somewhat blurry when retrieved from a natural video, something the researchers leave for future work. While telescreens can capture objects away from the screen, this imaging privacy issue is only confirmed for objects that make contact with a mobile device’s screen, much like how selfie cameras cannot capture objects out of frame.

More information:
Yang Liu et al, Imaging privacy threats from an ambient light sensor, Science Advances (2024). DOI: 10.1126/sciadv.adj3608

Citation:
Smart devices’ ambient light sensors pose imaging privacy risk (2024, January 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-01-smart-devices-ambient-sensors-pose.html


Amazon shifting to recycled paper filling for packages in North America

The Amazon logo is seen, June 15, 2023, at the Vivatech show in Paris. Amazon is moving from putting plastic air pillows in its packages to using recycled paper filling instead, a move that’s more environmentally friendly and secures items in boxes better. The company said Thursday, June 20, 2024 that it’s already replaced 95% of the plastic air fillers with paper filler in North America and is working toward complete removal by year’s end. Credit: AP Photo/Michel Euler, File

Amazon is shifting from the plastic air pillows used for packaging in North America to recycled paper because it’s more environmentally sound, and it says paper just works better.

The company said Thursday that it’s already replaced 95% of the plastic air pillows with paper filler in North America and is working toward complete removal by year’s end.

“We want to ensure that customers receive their items undamaged, while using as little packaging as possible to avoid waste, and prioritizing recyclable materials,” Amazon said.

It is the company’s largest plastic packaging reduction effort in North America to date and will remove almost 15 billion plastic air pillows from use annually.

Almost all customer deliveries for Prime Day this year, which happens next month, will contain no plastic air pillows, according to Amazon.

The e-commerce giant has faced years of criticism about its use of plastic from environmental groups, including a nonprofit called Oceana, which has been releasing its own reports on Amazon’s use of plastic packaging.

Matt Littlejohn, senior vice president of strategic initiatives at Oceana, said that Amazon’s effort to reduce plastic packaging is welcome news, but that there’s still more the company can do.

“While this is a significant step forward for the company, Amazon needs to build on this momentum and fulfill its multiyear commitment to transition its North America fulfillment centers away from plastic,” Littlejohn said in a prepared statement. “Then, the company should expand these efforts and also push innovations like reusable packaging to move away from single-use packaging everywhere it sells and ships.”

There has also been broad support among Amazon investors, who have urged the company to outline how it will reduce waste.

The company disclosed the total of single-use plastic across global operations for the first time in 2022 after investors sought more details on plans to reduce waste. The company said that it used 85,916 metric tons of single-use plastic that year, an 11.6% decrease from 2021.

Amazon began transitioning away from plastic air pillows in October at an automated fulfillment center in Ohio. The company said it was able to test and learn at that center, which helped it move quickly on switching to recycled paper filling.

The transition process included changing out machinery and training employees on new systems and machines.

Amazon discovered through testing that the paper filler, which is made from 100% recycled content and is curbside recyclable, offers the same, if not better, protection during shipping compared with plastic air pillows, the company said.

Christian Garcia, who works at Amazon’s fulfillment center in Bakersfield, California, said in a release that the paper filler is easier to work with and that the machinery gives staff more space so that it’s easier to pack orders.

Ongoing efforts to reduce waste include a campaign to ship items without any additional packaging, the company said. In 2022, 11% of all of Amazon’s packages shipped worldwide were without added delivery packaging.

Other efforts include piloting new technology with artificial intelligence and robotics company Glacier to use AI-powered robots to automate the sorting of recyclables and collect real-time data on recycling streams for companies. It’s also partnering with the U.S. Department of Energy on new materials and recycling programs.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
So long plastic air pillows: Amazon shifting to recycled paper filling for packages in North America (2024, June 20)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-plastic-air-pillows-amazon-shifting.html


Singapore study finds close to 5 in 10 say they would take air taxis in the future

Air drones. Credit: Rodolfo Quirós from Pexels

A study by researchers from Nanyang Technological University, Singapore (NTU Singapore) has found that Singaporeans are open to riding air taxis, which are small autonomous aircraft that carry passengers over short distances. Surveying 1,002 participants, the NTU Singapore team found that almost half (45.7%) say they intend to use this mode of transport when it becomes available, with over one-third (36.2%) planning to do so regularly.

According to the findings published online in the journal Technology in Society in April, the intention to take autonomous air taxis is associated with factors such as trust in the AI technology deployed in air taxis, hedonic motivation (the fun or pleasure derived from using technology), performance expectancy (the degree to which users expect that using the system will benefit them), and news media attention (the amount of attention paid to news about air taxis).

Air taxis and autonomous drone services are close to becoming a reality. China’s aviation authority issued its first safety approval certification last year to a Chinese drone maker for trial operations, and in Europe, authorities are working to certify air taxis safe to serve passengers at the Paris Olympics this year. For Singapore, which is looking to become a base for air taxi companies, the study findings could help the sector achieve lift-off, said the research team from NTU’s Wee Kim Wee School of Communication and Information (WKWSCI) led by Professor Shirley Ho.

Professor Ho, who is also NTU’s Associate Vice President for Humanities, Social Sciences & Research Communication, said, “Even though air taxis have yet to be deployed in Singapore, close to half of those surveyed said they would be keen to take air taxis in the future. This signifies a positive step forward for a nascent technology.

“Our study represents a significant step forward in understanding the factors that influence one’s intention to take air taxis. Insights into the public perception of air taxis will enable policymakers and tech developers to design targeted interventions that encourage air taxi use as they look to build up an air taxi industry in Singapore.”

The study aligns with NTU’s goal of pursuing research aligned with national priorities and with the potential for significant intellectual and societal impact, as articulated in the NTU 2025 five-year strategic plan. To gauge the public perception of air taxis, the NTU WKWSCI team surveyed 1,002 Singaporeans and permanent residents, drawing on a validated model that measures technology acceptance and use and the factors driving this behavior.

Participants were asked to score on a five-point scale in response to various statements about factors such as their trust in the AI system used in air taxis, their attention to news reports on air taxis, their perceived ease of use and usefulness of air taxis, as well as their attitudes and intention to take air taxis in the future.

The scores for each participant were then tabulated and used in statistical analyses to find out how these factors related to the participant’s intention to take air taxis.
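The published analysis uses a UTAUT2-based model rather than the toy regression below, but as a rough picture of the step from averaged five-point factor scores to predicted use intention, an illustrative sketch on synthetic data might be:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1002                                           # survey sample size

factors = {                                        # mean item score per factor, 1-5 scale (synthetic)
    "trust_in_ai": rng.uniform(1, 5, n),
    "news_attention": rng.uniform(1, 5, n),
    "performance_expectancy": rng.uniform(1, 5, n),
    "hedonic_motivation": rng.uniform(1, 5, n),
}
X = np.column_stack(list(factors.values()))
# Synthetic "intention" outcome so the example runs end to end.
intention = X @ np.array([0.3, 0.2, 0.25, 0.15]) + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, intention)
for name, coef in zip(factors, model.coef_):
    print(f"{name}: {coef:+.2f}")                  # larger coefficients = stronger predictors
```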

‘Generally positive’ sentiment about air taxis

Upon tabulating the scores, the researchers found that sentiments around air taxis are generally positive among the participants. Almost half (45.7%) said they intend to use this mode of transport when it becomes available. Close to four in 10 (36.2%) said they plan to do so regularly. Close to six in 10 (57%) thought taking air taxis would be fun, and 53% said they were excited about taking air taxis.

Six in 10 (60.9%) agreed that taking air taxis would help to get things done more quickly, and 61.2% believed that it would increase productivity. Half the participants also trusted the competency of the AI technology used in air taxis, and the AI engineers building the technology. Five in 10 (52.9%) agreed that the AI system in air taxis would be competent and effective at helping to transport people.

Factors that predict air taxi use

Upon conducting statistical analyses on the survey data, the researchers found that the following factors directly impacted participants’ intention to take air taxis: news media attention; trust in the AI system used in air taxis; attitude towards air taxis; performance expectancy; hedonic motivation; price value; social influence; and habit (the perception that taking air taxis could become a habit).

These findings suggest that when Singaporeans consider whether they would use autonomous air taxis, not only do they value the practical aspects of the technology, but also how much they can trust the AI system, said NTU WKWSCI’s Ph.D. student Justin Cheung, a co-author of the study.

Surprisingly, habit was the most robust predictor of people’s intention to use air taxis, despite the relatively smaller number of participants who agreed that taking the vehicles would become a habit for them, he said. This suggests that while the user base for autonomous passenger drones may be small, it could be a loyal one, he added.

Another robust predictor of use intention was attention to news media. In addition, the researchers found that news media attention could shape intentions to use air taxis and attitudes towards them by influencing trust in the AI systems, as well as the engineers who develop the AI systems behind air taxis.

Prof Ho said, “When technologies are yet to be deployed in the public sphere, news media offers the main and, in many instances, the only source of information for members of the public. Our findings suggest that policymakers could leverage positive news media reporting when introducing air taxis to shape public perceptions and thereby use intention.”

Credibility affects trust in media reports on AI technology

These findings build on a study authored by Prof Ho and WKWSCI research fellow Goh Tong Jee. Published in Science Communication in May, the study identified considerations that could affect the public’s trust in media organizations, policymakers and tech developers that introduce AI in autonomous vehicles (AVs).

Through six focus group discussions with 56 drivers and non-drivers, the researchers found that media credibility is a foundation upon which the public would evaluate the trustworthiness of media organizations. The focus group discussion participants said they would consider qualities such as balance, comprehensiveness, persuasiveness and objectivity of media organizations when assessing their ability to create quality content. The researchers also found that non-drivers raised more qualities than drivers regarding trust in media organizations.

The researchers attributed this observation to the enthusiasm non-drivers could have over the prospective use of AVs, which drove their tendency to seek information. Some qualities raised only by non-drivers during the focus group discussions include a media organization’s ability to spur discussions on whether AVs are a need or a want.

Another consideration is a media organization’s ability to create varied content. Non-drivers also shared their expectations that media organizations should be transparent and reveal “unflattering” information in the public’s interest during crises, even if it means affecting the reputation of policymakers or tech developers.

The findings from these two studies reaffirm the need for accurate and balanced reporting on AVs such as air taxis, due to the role news media can play in shaping public perception, and the public’s expectations of media organizations, according to Prof Ho.

“The two studies highlight the importance for media organizations to translate emerging scientific evidence accurately to facilitate informed decision-making. Given the speed at which innovative technologies emerge in the age of digitalization, accurate science communication has never been more crucial.”

More information:
Shirley S. Ho et al, Trust in artificial intelligence, trust in engineers, and news media: Factors shaping public perceptions of autonomous drones through UTAUT2, Technology in Society (2024). DOI: 10.1016/j.techsoc.2024.102533

Tong Jee Goh et al, Trustworthiness of Policymakers, Technology Developers, and Media Organizations Involved in Introducing AI for Autonomous Vehicles: A Public Perspective, Science Communication (2024). DOI: 10.1177/10755470241248169

Citation:
Singapore study finds close to 5 in 10 say they would take air taxis in the future (2024, May 28)
retrieved 25 June 2024
from https://techxplore.com/news/2024-05-singapore-air-taxis-future.html


Advanced AI-based techniques scale-up solving complex combinatorial optimization problems

HypOp methods. a,b, Hypergraph modeling (a) and distributed training of HyperGNN (b) in HypOp. Credit: Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00833-7

A framework based on advanced AI techniques can solve complex, computationally intensive problems faster and in a more scalable way than state-of-the-art methods, according to a study led by engineers at the University of California San Diego.

In the paper, which was published in Nature Machine Intelligence, researchers present HypOp, a framework that uses unsupervised learning and hypergraph neural networks. The framework is able to solve combinatorial optimization problems significantly faster than existing methods. HypOp is also able to solve certain combinatorial problems that can’t be solved as effectively by prior methods.

“In this paper, we tackle the difficult task of addressing combinatorial optimization problems that are paramount in many fields of science and engineering,” said Nasimeh Heydaribeni, the paper’s corresponding author and a postdoctoral scholar in the UC San Diego Department of Electrical and Computer Engineering. She is part of the research group of Professor Farinaz Koushanfar, who co-directs the Center for Machine-Intelligence, Computing and Security at the UC San Diego Jacobs School of Engineering. Professor Tina Eliassi-Rad from Northeastern University also collaborated with the UC San Diego team on this project.

One example of a relatively simple combinatorial problem is figuring out how many and what kind of goods to stock at specific warehouses in order to consume the least amount of gas when delivering these goods.

HypOp can be applied to a broad spectrum of challenging real-world problems, with applications in drug discovery, chip design, logic verification, logistics and more. These are all combinatorial problems with a wide range of variables and constraints that make them extremely difficult to solve. That is because in these problems, the size of the underlying search space for finding potential solutions increases exponentially rather than in a linear fashion with respect to the problem size.

HypOp can solve these complex problems in a more scalable manner by using a new distributed algorithm that allows multiple computation units on the hypergraph to solve the problem together, in parallel, more efficiently.

HypOp introduces a new problem embedding that leverages hypergraph neural networks, which capture higher-order connections than traditional graph neural networks, to better model the problem constraints and solve them more proficiently. HypOp can also transfer learning from one problem to help solve other, seemingly different problems more effectively. It includes an additional fine-tuning step, which leads to more accurate solutions than prior existing methods.
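To make the idea concrete, here is a small, self-contained sketch (not the HypOp codebase) of unsupervised optimization with a hypergraph network: a toy network passes messages through the hypergraph's incidence matrix, outputs a selection probability per node, and is trained against a differentiable loss that combines the objective with penalties for violated hyperedge constraints. The example task (minimum hypergraph vertex cover), sizes, and penalty weight are all illustrative.

```python
import torch
import torch.nn as nn

n_nodes = 8
hyperedges = [[0, 1, 2], [2, 3], [3, 4, 5, 6], [6, 7], [1, 5, 7]]

# Incidence matrix H[v, e] = 1 if node v belongs to hyperedge e.
H = torch.zeros(n_nodes, len(hyperedges))
for e, nodes in enumerate(hyperedges):
    H[nodes, e] = 1.0

x = torch.eye(n_nodes)                                 # one-hot node features
W1, W2 = nn.Linear(n_nodes, 16), nn.Linear(16, 1)
opt = torch.optim.Adam(list(W1.parameters()) + list(W2.parameters()), lr=0.05)

for step in range(300):
    # One round of node -> hyperedge -> node message passing via the incidence matrix.
    edge_msg = (H.T @ torch.relu(W1(x))) / H.sum(0, keepdim=True).T
    node_msg = (H @ edge_msg) / H.sum(1, keepdim=True)
    p = torch.sigmoid(W2(node_msg)).squeeze(-1)        # probability each node is selected

    # prod_{v in e} (1 - p_v): close to 1 when a hyperedge is left uncovered.
    cover_gap = torch.prod(1 - p.unsqueeze(1) * H, dim=0)
    loss = p.sum() + 10.0 * cover_gap.sum()            # objective + constraint penalty
    opt.zero_grad(); loss.backward(); opt.step()

solution = (p > 0.5).int()
print("selected nodes:", solution.tolist())
```

In HypOp itself, a fine-tuning and rounding stage would then refine such relaxed probabilities into a feasible solution, and training would be split across multiple computation units for large instances.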

The code for HypOp is available as open source.

Below, the UC San Diego research team on this paper breaks down the findings for a broader audience through a short Q&A.

You note in the press release that HypOp also transfer-learns from one type of problem objective to help solve other cost functions more effectively. For a non-technical expert, is there more to say about this phenomenon that is relevant to the larger conversation about how AI is empowering researchers to solve problems and make discoveries that would otherwise be impossible?

HypOp’s ability to transfer-learn from one problem to assist in solving others is a prime example of how AI can introduce a paradigm shift in research and discovery. This capability, known as transfer learning, allows AI systems to apply knowledge gained from solving one problem to new but related problems with a different cost function, making them more versatile and efficient.

For non-technical experts, consider how human expertise works. For instance, learning piano creates a comprehensive musical foundation that makes learning guitar faster and more effective. The transferable skills include music theory knowledge, reading proficiency, rhythmic understanding, finger dexterity, and aural abilities. These skills collectively enhance the learning experience and lead to quicker and better mastery of the guitar for someone who already knows how to play the piano. In comparison, a novice music student would have a much longer learning curve.

This synergy between human intelligence and AI amplifies researchers’ ability to address complex, interdisciplinary challenges and drive progress in ways that were previously unimaginable. That is one reason why we are very excited about HypOp’s advancements and contributions.

There is a lot of conversation in many different circles about using machine learning and artificial intelligence to help researchers make discoveries faster, or even to make discoveries that would otherwise be impossible. For people who may not understand all the technical details of your new paper, how influential do you believe this new approach, HypOp, will be in terms of how AI is used in problem solving and research?

The overarching concept is that learning the pertinent problem structure can greatly enhance the quality and speed of solving combinatorial optimization problems. HypOp’s particular methodology holds significant potential for influencing the way AI is applied in problem solving and research. By leveraging hypergraph neural networks (HyperGNNs), HypOp extends the capabilities of traditional graph neural networks to scalably tackle higher-order constrained combinatorial optimization problems. This advancement is crucial because many real-world problems involve complex constraints and interactions that go beyond the simple pairwise relationships earlier approaches were built around.

The code for HypOp is available online. Do you expect people will start using the code right away to solve combinatorial optimization problems? Or is there more work to be done before people can start using the code?

Yes, people can start using the HypOp open-source code right away to solve large-scale combinatorial optimization problems.

What problems is HypOp able to solve that other methods can’t tackle?

HypOp can solve large-scale optimization problems with generic objective functions and constraints. Most existing solvers can handle only specific objective functions, such as linear or quadratic ones, and can only model pairwise constraints. Moreover, HypOp leverages distributed training techniques, which enable it to scale to substantial problem instances.

What are the next steps in terms of research for HypOp?

We are focused on extending the generalizability and scalability of HypOp. We are doing so by designing other advanced AI techniques that are capable of learning from addressing smaller problem instances and generalizing to larger problem cases.

More information:
Nasimeh Heydaribeni et al, Distributed constrained combinatorial optimization leveraging hypergraph neural networks, Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00833-7

Citation:
Advanced AI-based techniques scale-up solving complex combinatorial optimization problems (2024, June 10)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-advanced-ai-based-techniques-scale.html
