
Researchers develop novel method for compactly implementing image-recognizing AI

Researchers propose a heuristic compression method for convolutional neural network models that applies three conventional reduction techniques in the sequence of integer quantization, network slimming, and deep compression. The method autonomously finds the minimal size of a network model by iterating margin calculations. Credit: IEEE Access (2024). DOI: 10.1109/ACCESS.2024.3399541

Artificial intelligence (AI) technology used in image recognition has a structure that mimics human vision and brain neurons. Three methods are known for reducing the amount of data required to compute these visual and neuronal components. Until now, the ratio in which to apply them was determined through trial and error.

Researchers at the University of Tsukuba have developed a new algorithm that automatically identifies the optimal proportion of each method. This algorithm is expected to decrease power consumption in AI technologies and contribute to the miniaturization of semiconductors.

Convolutional neural networks (CNNs) are pivotal in applications such as facial recognition at airport immigration and object detection in autonomous vehicles.

CNNs are composed of convolutional and fully connected layers; the former simulate human vision, while the latter mimic how the brain deduces the type of image from visual data.

By reducing the number of data bits used in computations, CNNs can maintain recognition accuracy while substantially reducing computational demands. This efficiency allows the supporting hardware to be more compact.

Three reduction methods have been identified so far: network slimming (NS) to minimize the visual components, deep compression (DC) to reduce the neuronal components, and integer quantization (IQ) to decrease the number of bits used. Previously, there was no definitive guideline on the order of implementation or allocation of these methods.

The new study, published in IEEE Access, establishes that the optimal sequence of these methods for minimizing the data amount is IQ, followed by NS and DC. In addition, the researchers have created an algorithm that determines the application ratio of each method autonomously, removing the necessity for trial and error.
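
The paper's heuristic determines those ratios by iterating margin calculations; the details are in the IEEE Access article. Purely as an illustration of the three reductions and the established order, the toy NumPy sketch below applies IQ, NS, and DC to one layer's weights (the function names, keep ratio, and sparsity level are invented for illustration, not taken from the paper):

```python
import numpy as np

def integer_quantize(w, bits=8):
    """IQ: map float weights to signed integers with a per-tensor scale."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale).astype(np.int8), scale

def network_slim(w, keep_ratio=0.5):
    """NS (structured pruning): keep output channels with the largest L1 norms."""
    norms = np.abs(w).sum(axis=1)                 # one norm per output channel
    keep = np.sort(np.argsort(norms)[-int(len(norms) * keep_ratio):])
    return w[keep]

def deep_compress(w, sparsity=0.7):
    """DC (unstructured pruning): zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3 * 3 * 32))             # one conv layer, flattened

q, scale = integer_quantize(w)                    # 1. IQ: fewer bits per weight
w = network_slim(q * scale)                       # 2. NS: fewer channels
w = deep_compress(w)                              # 3. DC: fewer nonzero weights
print(w.shape, f"{(w == 0).mean():.0%} of remaining weights zeroed")
```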

This algorithm enables a CNN to be compressed to 1/28 of its original size and to run 76 times faster than previous models.

The implications of this research are poised to transform AI image recognition technology by dramatically reducing computational complexity, power consumption, and the size of AI semiconductor devices. This breakthrough will likely enhance the widespread feasibility of deploying advanced AI systems.

More information:
Danhe Tian et al, Heuristic Compression Method for CNN Model Applying Quantization to a Combination of Structured and Unstructured Pruning Techniques, IEEE Access (2024). DOI: 10.1109/ACCESS.2024.3399541

Citation:
Researchers develop novel method for compactly implementing image-recognizing AI (2024, June 6)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-method-compactly-image-ai.html


New tech addresses augmented reality’s privacy problem

(From left) Bo Ji, Brendan David-John, and graduate student Matthew Corbett devised a new method to protect bystander privacy in augmented reality. Credit: Kelly Izlar/Virginia Tech

An emergency room doctor using augmented reality could save precious seconds by quickly checking a patient’s vitals or records. But the doctor could also unintentionally pull information for someone else in the room, breaching privacy and health care laws.

A Commonwealth Cyber Initiative team created a technique called BystandAR to protect bystander privacy while still providing an immersive augmented-reality experience. The researchers presented the technology at ACM MobiSys 2023 last summer and explained it in a December IEEE Security & Privacy article.

“Protecting bystander privacy is an important problem,” said Bo Ji, associate professor of computer science. “Our work raises awareness and encourages the adoption of augmented reality in the future.”

Early results, which were presented last summer, correctly identified and protected more than 98% of bystanders within the data stream while allowing access to more than 96% of the subject data. In addition, BystandAR does not require offloading unprotected bystander data to another device for analysis, which would present a further risk of privacy leakage.

With support from Virginia Tech Intellectual Properties and the LINK + LICENSE + LAUNCH Proof of Concept Program, the team filed a provisional patent on BystandAR, which distorts bystanders’ images in augmented-reality devices.

Concerns about privacy violations contributed to the failure of Google Glass almost a decade ago. Like similar devices, the eyewear projects interactive computer-generated content, such as video, graphics, or GPS data, onto a user’s view of the world. But Google Glass’ cameras and microphones allowed users to record their surroundings without the consent of those around them.

“It made people uncomfortable, and for good reason,” Ji said. “Maybe you’re in a restaurant with your kids. You have no control over who is collecting their data or what happens to it.”

BystandAR builds on a key insight from psychological studies: An individual usually looks most directly and longest at the person they are interacting with. Therefore, eye gaze is a highly effective indicator for differentiating between bystander and subject in a social context. Ji’s technique leverages eye-gaze tracking, near-field microphone, and spatial awareness to detect and obscure bystanders captured within sensor data in real time.
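
The published pipeline is more involved, but the gaze-dwell heuristic itself is simple to sketch. Assuming face boxes from any detector and per-face gaze-hit histories from the headset's eye tracker (both synthesized below; the threshold and window sizes are invented for illustration), a minimal NumPy sketch that pixelates faces the user rarely looks at:

```python
import numpy as np

DWELL_THRESHOLD = 0.6   # hypothetical: min fraction of recent frames gazed at
WINDOW = 30             # hypothetical: sliding window of frames

def pixelate(region, k=8):
    """Crude pixelation: downsample, then repeat blocks to obscure identity."""
    h, w = region.shape[:2]
    small = region[::k, ::k]
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)[:h, :w]

def protect_frame(frame, faces, gaze_hits):
    """Obscure every detected face whose recent gaze-dwell fraction is low.

    faces:     list of (x, y, w, h) boxes from any face detector
    gaze_hits: per-face 0/1 arrays marking frames where gaze hit that face
    """
    out = frame.copy()
    for (x, y, w, h), hits in zip(faces, gaze_hits):
        if np.mean(hits[-WINDOW:]) < DWELL_THRESHOLD:   # bystander, not subject
            out[y:y + h, x:x + w] = pixelate(out[y:y + h, x:x + w])
    return out

# Synthetic demo: one subject (often gazed at), one bystander (rarely gazed at).
frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
faces = [(40, 40, 64, 64), (200, 40, 64, 64)]
gaze_hits = [np.ones(WINDOW), np.zeros(WINDOW)]
protected = protect_frame(frame, faces, gaze_hits)  # second face is pixelated
```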

In a related work, for which he recently received a $1.2 million National Science Foundation award, Ji is developing a method to improve the efficiency and performance of next-generation wireless networks so that more people can take advantage of seamless, immersive augmented reality experiences.

“Although these are two separate projects, you can think of it as a part of the same effort to improve augmented reality from both sides—ensuring privacy for individual users locally, and improving the network to provide a seamless, secure, and functional experience globally,” Ji said.

More information:
Matthew Corbett et al, Securing Bystander Privacy in Mixed Reality While Protecting the User Experience, IEEE Security & Privacy (2023). DOI: 10.1109/MSEC.2023.3331649

Matthew Corbett et al, BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems, Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services (2023). DOI: 10.1145/3581791.3596830

Provided by Virginia Tech


Citation:
New tech addresses augmented reality’s privacy problem (2024, January 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-01-tech-augmented-reality-privacy-problem.html


Amazon to invest extra 10 bn euros in Germany

Credit: Unsplash/CC0 Public Domain

Amazon said Wednesday it will invest an additional 10 billion euros ($10.7 billion) in Germany, most of it in cloud computing, the US tech giant’s latest major investment in Europe.

A total of 8.8 billion euros will come from Amazon’s cloud computing division AWS and will be invested in southwest Germany by 2026, with the rest going into logistics, robotics and company offices.

The investment comes on top of 7.8 billion euros announced last month by AWS towards building a “sovereign cloud” center in Germany.

The first sovereign cloud complex will be set up in the state of Brandenburg, and will be operational by the end of 2025.

The new system is to address concerns of some European countries and public agencies, which have been reluctant to resort to cloud computing for fear data would be transferred to other jurisdictions, notably the United States.

With its recently announced investments, Amazon said it was hiring thousands of new workers in Germany, taking its total number of permanent employees in the country to about 40,000 by the end of the year.

Chancellor Olaf Scholz said on X that the investments showed that Germany remained “an attractive business location”.

“As the government, we are working on precisely this: strengthening our competitiveness,” he said.

Germany, Europe’s top economy, sees attracting new investments in high-tech fields as crucial as it struggles to emerge from a period of weakness.

The tech giant has also in recent times announced major investments to expand data centers in Spain and to develop cloud infrastructure and logistical infrastructure of its parcel delivery system in France.

A pioneer of e-commerce, Amazon also dominates cloud computing through AWS, which held 31 percent of the market at the end of 2023, according to Stocklytics.

But rivals Microsoft and Google are gaining ground.

© 2024 AFP

Citation:
Amazon to invest extra 10 bn euros in Germany (2024, June 19)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-amazon-invest-extra-bn-euros.html


Meta’s AI can translate dozens of under-resourced languages

Architecture of the LASER3 teacher-student approach. Credit: Nature (2024). DOI: 10.1038/s41586-024-07335-x

The technology behind Meta’s artificial intelligence model, which can translate 200 different languages, is described in a paper published in Nature. The model expands the number of languages that can be translated via machine translation.

Neural machine translation models use artificial neural networks to translate between languages. They typically need large amounts of online training data, which may not be publicly, cheaply, or commonly available for some languages, termed "low-resource languages." Simply increasing the number of languages a model translates can also degrade the quality of its translations.

Marta Costa-jussà and the No Language Left Behind (NLLB) team have developed a cross-language approach, which allows neural machine translation models to learn how to translate low-resource languages using their pre-existing ability to translate high-resource languages.

As a result, the researchers have developed an online multilingual translation tool, called NLLB-200, that includes 200 languages, contains three times as many low-resource languages as high-resource languages, and performs 44% better than pre-existing systems.
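
The released NLLB-200 checkpoints are publicly available through Hugging Face. A minimal usage sketch, assuming the transformers library and the distilled 600M variant (language tags follow the FLORES-200 convention, e.g. "eng_Latn" for English and "fuv_Latn" for Nigerian Fulfulde):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Machine translation helps people access knowledge.",
                   return_tensors="pt")
# Force the decoder to start with the target-language tag.
tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fuv_Latn"),
    max_length=64)
print(tokenizer.batch_decode(tokens, skip_special_tokens=True)[0])
```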

Because the researchers had access to only 1,000–2,000 samples for many low-resource languages, they used a language-identification system to find more instances of those languages and so increase the volume of training data for NLLB-200. The team also mined bilingual textual data from Internet archives, which helped improve the quality of NLLB-200's translations.
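
Meta's NLLB work trained its own fastText-based language-identification model covering its full language set; as a stand-in, the widely available lid.176.bin fastText model illustrates the filtering step (the function name and thresholds below are invented for illustration):

```python
import fasttext

# Pretrained LID model from fasttext.cc; NLLB used a larger in-house model.
lid = fasttext.load_model("lid.176.bin")

def mine_monolingual(lines, target_lang="fr", min_conf=0.9):
    """Keep only lines confidently identified as the target language."""
    kept = []
    for line in lines:
        labels, probs = lid.predict(line.strip())
        if labels[0] == f"__label__{target_lang}" and probs[0] >= min_conf:
            kept.append(line)
    return kept
```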

The authors note that this tool could help people speaking rarely translated languages to access the Internet and other technologies. Additionally, they highlight education as a particularly significant application, as the model could help those speaking low-resource languages access more books and research articles. However, Costa-jussà and co-authors acknowledge that mistranslations may still occur.

More information:
Scaling neural machine translation to 200 languages, Nature (2024). DOI: 10.1038/s41586-024-07335-x

Citation:
Meta’s AI can translate dozens of under-resourced languages (2024, June 7)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-meta-ai-dozens-resourced-languages.html


Smart devices’ ambient light sensors pose imaging privacy risk

CSAIL uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen. Credit: Alex Shipps/MIT CSAIL

In George Orwell's novel "1984," Big Brother watches citizens through two-way, TV-like telescreens, surveilling them without any cameras. In a similar fashion, our current smart devices contain ambient light sensors, which open the door to a different threat: hackers.

These passive, seemingly innocuous smartphone components receive light from the environment and adjust the screen’s brightness accordingly, like when your phone automatically dims in a bright room. Unlike cameras, though, apps are not required to ask for permission to use these sensors.

In a surprising discovery, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen.

The team proposed a computational imaging algorithm to recover an image of the environment from the perspective of the display screen using subtle single-point light intensity changes of these sensors to demonstrate how hackers could use them in tandem with monitors.

The paper was published in Science Advances earlier this January.

“This work turns your device’s ambient light sensor and screen into a camera! Ambient light sensors are tiny devices deployed in almost all portable devices and screens that surround us in our daily lives,” says Princeton University professor Felix Heide, who was not involved with the paper. “As such, the authors highlight a privacy threat that affects a comprehensive class of devices and has been overlooked so far.”

While phone cameras have previously been exposed as security threats for recording user activity, the MIT group found that ambient light sensors can capture images of users’ touch interactions without a camera. According to their new study, these sensors can eavesdrop on regular gestures, like scrolling, swiping, or sliding, and capture how users interact with their phones while watching videos. For example, apps with native access to your screen, including video players and web browsers, could spy on you to gather this permission-free data.

According to the researchers, the commonly held belief is that ambient light sensors don't reveal meaningful private information to hackers, so programming apps to request access to them is unnecessary. "Many believe that these sensors should always be turned on," says lead author Yang Liu, a Ph.D. student in MIT's Electrical Engineering & Computer Science Department (EECS) and CSAIL.

“But much like the telescreen, ambient light sensors can passively capture what we’re doing without our permission, while apps are required to request access to our cameras. Our demonstrations show that when combined with a display screen, these sensors could pose some sort of imaging privacy threat by providing that information to hackers monitoring your smart devices.”

Collecting these images requires a dedicated inversion process where the ambient light sensor first collects low-bitrate variations in light intensity, partially blocked by the hand making contact with the screen. Next, the outputs are mapped into a two-dimensional space by forming an inverse problem with the knowledge of the screen content. An algorithm then reconstructs the picture from the screen’s perspective, which is iteratively optimized and denoised via deep learning to reveal a pixelated image of hand activity.
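
Stripped of the deep-learning stage, the core of that inversion is a linear inverse problem: each known screen frame acts as an illumination pattern, and the single sensor reading it produces is approximately that pattern's inner product with the unknown scene in front of the screen. A toy NumPy sketch of this idea, with made-up dimensions and noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 16                        # toy resolution of the recovered image
n_frames = 600                    # known screen frames (e.g. video content)

# x: unknown scene in front of the screen (here, a dark "hand" blocking light).
x = np.ones(H * W)
x[60:200] = 0.2

# A: each row is one known frame's per-pixel brightness; the sensor reading is
# the light the frame sends back, attenuated pixel-by-pixel by the scene.
A = rng.uniform(size=(n_frames, H * W))
y = A @ x + rng.normal(scale=0.01, size=n_frames)  # noisy, low-bitrate readings

# Invert the forward model; the paper further denoises with deep learning.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("mean reconstruction error:", np.abs(x_hat - x).mean().round(3))
```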

The study introduces a novel combination of passive sensors and active monitors to reveal a previously unexplored imaging threat that could expose the environment in front of the screen to hackers processing the sensor data from another device. "This imaging privacy threat has never been demonstrated before," says Liu, who worked on the paper alongside senior author Frédo Durand, an MIT EECS professor and CSAIL member.

The team suggested two software mitigation measures for operating system providers: tightening up permissions and reducing the precision and speed of the sensors. First, they recommend restricting access to the ambient light sensor by allowing users to approve or deny those requests from apps. To further prevent any privacy threats, the team also proposed limiting the capabilities of the sensors.

By reducing the precision and speed of these components, the sensors would reveal less private information. From the hardware side, the ambient light sensor should not be directly facing the user on any smart device, they argued, but instead placed on the side where it won’t capture any significant touch interactions.

Getting the picture

The inversion process was applied to three demonstrations using an Android tablet. In the first test, the researchers seated a mannequin in front of the device, while different hands made contact with the screen. A human hand pointed to the screen, and later, a cardboard cutout resembling an open-hand gesture touched the monitor, with the pixelated imprints gathered by the MIT team revealing the physical interactions with the screen.

A subsequent demo with human hands revealed that the way users slide, scroll, pinch, swipe, and rotate could be gradually captured by hackers through the same imaging method, although only at a speed of one frame every 3.3 minutes. With a faster ambient light sensor, malicious actors could potentially eavesdrop on user interactions with their devices in real time.

In a third demo, the group found that users are also at risk when watching videos like films and short clips. A human hand hovered in front of the sensor while scenes from Tom and Jerry played on screen, with a white board behind the user reflecting light to the device. The ambient light sensor captured the subtle intensity changes for each video frame with the resulting images exposing touch gestures.

While the vulnerabilities in ambient light sensors pose a threat, such a hack is still limited. The attack is slow: at the current retrieval rate of one frame every 3.3 minutes, most user interactions are over before a single frame is captured. Additionally, images retrieved from natural video remain somewhat blurry, which may motivate future research. And while telescreens can capture objects away from the screen, this imaging privacy issue is confirmed only for objects that make contact with a mobile device's screen, much as selfie cameras cannot capture objects out of frame.

More information:
Yang Liu et al, Imaging privacy threats from an ambient light sensor, Science Advances (2024). DOI: 10.1126/sciadv.adj3608

Citation:
Smart devices’ ambient light sensors pose imaging privacy risk (2024, January 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-01-smart-devices-ambient-sensors-pose.html
