
New tech addresses augmented reality’s privacy problem

(From left) Bo Ji, Brendan David-John, and graduate student Matthew Corbett devised a new method to protect bystander privacy in augmented reality. Credit: Kelly Izlar/Virginia Tech

An emergency room doctor using augmented reality could save precious seconds by quickly checking a patient’s vitals or records. But the doctor could also unintentionally pull information for someone else in the room, breaching privacy and health care laws.

A Commonwealth Cyber Initiative team created a technique called BystandAR to protect bystander privacy while still providing an immersive augmented-reality experience. The researchers presented the technology at ACM MobiSys 2023 last summer and explained it in a December IEEE Security & Privacy article.

“Protecting bystander privacy is an important problem,” said Bo Ji, associate professor of computer science. “Our work raises awareness and encourages the adoption of augmented reality in the future.”

Early results, presented last summer, showed that the system correctly identified and protected more than 98% of bystanders within the data stream while allowing access to more than 96% of the subject data. In addition, BystandAR does not require offloading unprotected bystander data to another device for analysis, which would pose a further risk of privacy leakage.

With support from Virginia Tech Intellectual Properties and the LINK + LICENSE + LAUNCH Proof of Concept Program, the team filed a provisional patent on BystandAR, which distorts bystanders’ images in augmented-reality devices.

Concerns about privacy violations contributed to the failure of Google Glass almost a decade ago. Like similar devices, the eyewear projects interactive computer-generated content, such as video, graphics, or GPS data, onto a user’s view of the world. But Google Glass’ cameras and microphones allowed users to record their surroundings without the consent of those around them.

“It made people uncomfortable, and for good reason,” Ji said. “Maybe you’re in a restaurant with your kids. You have no control over who is collecting their data or what happens to it.”

BystandAR builds on a key insight from psychological studies: An individual usually looks most directly and longest at the person they are interacting with. Eye gaze is therefore a highly effective indicator for distinguishing bystanders from subjects in a social context. Ji’s technique combines eye-gaze tracking, a near-field microphone, and spatial awareness to detect and obscure bystanders captured in sensor data in real time.
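As a rough illustration of that idea, the following is a minimal Python sketch of dwell-time-based subject detection with bystander redaction. It is not the authors’ BystandAR implementation: the thresholds, class names, and helper functions are hypothetical, and OpenCV merely stands in for the headset’s image pipeline.

```python
from dataclasses import dataclass

DWELL_THRESHOLD_S = 2.0   # hypothetical cumulative-gaze threshold for "subject"
DECAY_PER_FRAME = 0.95    # hypothetical decay so stale gaze evidence fades

@dataclass
class TrackedPerson:
    person_id: int
    bbox: tuple               # (x, y, w, h) of the detected face in the frame
    gaze_dwell_s: float = 0.0

def update_dwell(people, gaze_target_id, frame_dt):
    """Accumulate gaze dwell time on whoever the user is currently looking at."""
    for p in people:
        if p.person_id == gaze_target_id:
            p.gaze_dwell_s += frame_dt
        else:
            p.gaze_dwell_s *= DECAY_PER_FRAME

def redact_bystanders(frame, people):
    """Blur every detected person who has not accumulated enough gaze dwell."""
    import cv2  # OpenCV stands in for the headset's image pipeline
    for p in people:
        if p.gaze_dwell_s < DWELL_THRESHOLD_S:        # treated as a bystander
            x, y, w, h = p.bbox
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```

In the actual system, the gaze signal is combined with voice and spatial cues before anything is obscured; the sketch isolates only the gaze-dwell classification step.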

In a related work, for which he recently received a $1.2 million National Science Foundation award, Ji is developing a method to improve the efficiency and performance of next-generation wireless networks so that more people can take advantage of seamless, immersive augmented reality experiences.

“Although these are two separate projects, you can think of it as a part of the same effort to improve augmented reality from both sides—ensuring privacy for individual users locally, and improving the network to provide a seamless, secure, and functional experience globally,” Ji said.

More information:
Matthew Corbett et al, Securing Bystander Privacy in Mixed Reality While Protecting the User Experience, IEEE Security & Privacy (2023). DOI: 10.1109/MSEC.2023.3331649

Matthew Corbett et al, BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems, Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services (2023). DOI: 10.1145/3581791.3596830

Provided by
Virginia Tech


Citation:
New tech addresses augmented reality’s privacy problem (2024, January 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-01-tech-augmented-reality-privacy-problem.html


Amazon to invest extra 10 bn euros in Germany


Amazon said Wednesday it will invest an additional 10 billion euros ($10.7 billion) in Germany, most of it in cloud computing, the US tech giant’s latest major investment in Europe.

A total of 8.8 billion euros will come from Amazon’s cloud computing division AWS and will be invested in southwest Germany by 2026, with the rest going into logistics, robotics and company offices.

The investment comes on top of 7.8 billion euros announced last month by AWS towards building a “sovereign cloud” center in Germany.

The first sovereign cloud complex will be set up in the state of Brandenburg, and will be operational by the end of 2025.

The new system is intended to address the concerns of some European countries and public agencies, which have been reluctant to adopt cloud computing for fear that data would be transferred to other jurisdictions, notably the United States.

With its recently announced investments, Amazon said it was hiring thousands of new workers in Germany, taking its total number of permanent employees in the country to about 40,000 by the end of the year.

Chancellor Olaf Scholz said on X that the investments showed that Germany remained “an attractive business location”.

“As the government, we are working on precisely this: strengthening our competitiveness,” he said.

Germany, Europe’s top economy, sees attracting new investments in high-tech fields as crucial as it struggles to emerge from a period of weakness.

The tech giant has also in recent times announced major investments to expand data centers in Spain and to develop cloud infrastructure and logistical infrastructure of its parcel delivery system in France.

A pioneer of e-commerce, Amazon also dominates cloud computing through AWS, which held 31 percent of the market at the end of 2023, according to Stocklytics.

But rivals Microsoft and Google are gaining ground.

© 2024 AFP

Citation:
Amazon to invest extra 10 bn euros in Germany (2024, June 19)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-amazon-invest-extra-bn-euros.html


Meta’s AI can translate dozens of under-resourced languages

Architecture of the LASER3 teacher-student approach. Credit: Nature (2024). DOI: 10.1038/s41586-024-07335-x

The technology behind Meta’s artificial intelligence model, which can translate 200 different languages, is described in a paper published in Nature. The model expands the number of languages that can be translated via machine translation.

Neural machine translation models use artificial neural networks to translate between languages. These models typically need large amounts of data to train on, which may not be publicly, cheaply, or commonly available for some languages, termed “low-resource languages.” And increasing the number of languages a model translates can degrade the quality of its translations.

Marta Costa-jussà and the No Language Left Behind (NLLB) team have developed a cross-language approach, which allows neural machine translation models to learn how to translate low-resource languages using their pre-existing ability to translate high-resource languages.

As a result, the researchers have developed an online multilingual translation tool, called NLLB-200, that includes 200 languages, contains three times as many low-resource languages as high-resource languages, and performs 44% better than pre-existing systems.
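Meta has released NLLB-200 checkpoints publicly, so translations can be run locally through the Hugging Face transformers library. The snippet below is a minimal sketch assuming the distilled 600M-parameter checkpoint and FLORES-200 language codes; exact checkpoint names and tokenizer details may vary across library versions.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"   # one of the released NLLB-200 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Machine translation helps people read in their own language."
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start with the target-language token
# (here Zulu, "zul_Latn" in the FLORES-200 code scheme).
target_lang_id = tokenizer.convert_tokens_to_ids("zul_Latn")
output_ids = model.generate(**inputs, forced_bos_token_id=target_lang_id, max_length=64)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```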

Because the researchers had access to only 1,000–2,000 samples for many low-resource languages, they used a language-identification system to find more instances of those languages and so increase the volume of training data for NLLB-200. The team also mined bilingual textual data from Internet archives, which helped improve the quality of the translations NLLB-200 provides.

The authors note that this tool could help people speaking rarely translated languages to access the Internet and other technologies. Additionally, they highlight education as a particularly significant application, as the model could help those speaking low-resource languages access more books and research articles. However, Costa-jussà and co-authors acknowledge that mistranslations may still occur.

More information:
Scaling neural machine translation to 200 languages, Nature (2024). DOI: 10.1038/s41586-024-07335-x

Citation:
Meta’s AI can translate dozens of under-resourced languages (2024, June 7)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-meta-ai-dozens-resourced-languages.html


Smart devices’ ambient light sensors pose imaging privacy risk

CSAIL uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen. Credit: Alex Shipps/MIT CSAIL

In George Orwell’s novel “1984,” Big Brother watches citizens through two-way, TV-like telescreens, surveilling them without any cameras. In a similar fashion, our current smart devices contain ambient light sensors, which open the door to a different threat: hackers.

These passive, seemingly innocuous smartphone components receive light from the environment and adjust the screen’s brightness accordingly, as when your phone automatically dims in a bright room. Unlike with cameras, though, apps are not required to ask permission to use these sensors.

In a surprising discovery, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) uncovered that ambient light sensors are vulnerable to privacy threats when embedded on a smart device’s screen.

To demonstrate how hackers could use these sensors in tandem with monitors, the team proposed a computational imaging algorithm that recovers an image of the environment from the perspective of the display screen, using the subtle single-point light-intensity changes the sensors register.

The paper was published in Science Advances earlier this January.

“This work turns your device’s ambient light sensor and screen into a camera! Ambient light sensors are tiny devices deployed in almost all portable devices and screens that surround us in our daily lives,” says Princeton University professor Felix Heide, who was not involved with the paper. “As such, the authors highlight a privacy threat that affects a comprehensive class of devices and has been overlooked so far.”

While phone cameras have previously been exposed as security threats for recording user activity, the MIT group found that ambient light sensors can capture images of users’ touch interactions without a camera. According to their new study, these sensors can eavesdrop on regular gestures, like scrolling, swiping, or sliding, and capture how users interact with their phones while watching videos. For example, apps with native access to your screen, including video players and web browsers, could spy on you to gather this permission-free data.

According to the researchers, the commonly held belief is that ambient light sensors don’t reveal meaningful private information to hackers, so programming apps to request access to them is unnecessary. “Many believe that these sensors should always be turned on,” says lead author Yang Liu, a Ph.D. student in MIT’s Department of Electrical Engineering and Computer Science (EECS) and CSAIL.

“But much like the telescreen, ambient light sensors can passively capture what we’re doing without our permission, while apps are required to request access to our cameras. Our demonstrations show that when combined with a display screen, these sensors could pose some sort of imaging privacy threat by providing that information to hackers monitoring your smart devices.”

Collecting these images requires a dedicated inversion process where the ambient light sensor first collects low-bitrate variations in light intensity, partially blocked by the hand making contact with the screen. Next, the outputs are mapped into a two-dimensional space by forming an inverse problem with the knowledge of the screen content. An algorithm then reconstructs the picture from the screen’s perspective, which is iteratively optimized and denoised via deep learning to reveal a pixelated image of hand activity.
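As a toy illustration of that inversion, the sketch below treats each sensor reading as the inner product of a known (flattened) screen pattern with an unknown occlusion image and solves a Tikhonov-regularized least-squares problem. This is only a simplified stand-in for the paper’s pipeline, which also applies iterative optimization and learned denoising; the array sizes and synthetic data are hypothetical.

```python
import numpy as np

def reconstruct(readings, screen_patterns, lam=1e-2):
    """Tikhonov-regularized least squares: argmin_x ||A x - y||^2 + lam * ||x||^2."""
    A = screen_patterns.reshape(len(readings), -1)   # (T, H*W): known screen patterns
    y = np.asarray(readings)                         # (T,): sensor intensity readings
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return x.reshape(screen_patterns.shape[1:])      # back to an (H, W) image

# Synthetic example: a 32x32 scene occluding 2,000 known screen patterns.
rng = np.random.default_rng(0)
patterns = rng.random((2000, 32, 32))
scene = np.zeros((32, 32))
scene[8:24, 12:20] = 1.0                             # a blocky stand-in for a hand
readings = patterns.reshape(2000, -1) @ scene.ravel() + 0.01 * rng.standard_normal(2000)
estimate = reconstruct(readings, patterns)
```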

The study introduces a novel combination of passive sensors and active monitors to reveal a previously unexplored imaging threat that could expose the environment in front of the screen to hackers processing the sensor data from another device. “This imaging privacy threat has never been demonstrated before,” says Liu, who worked on the paper alongside senior author Frédo Durand, an MIT EECS professor and CSAIL member.

The team suggested two software mitigation measures for operating system providers: tightening up permissions and reducing the precision and speed of the sensors. First, they recommend restricting access to the ambient light sensor by allowing users to approve or deny those requests from apps. To further prevent any privacy threats, the team also proposed limiting the capabilities of the sensors.

By reducing the precision and speed of these components, the sensors would reveal less private information. From the hardware side, the ambient light sensor should not be directly facing the user on any smart device, they argued, but instead placed on the side where it won’t capture any significant touch interactions.
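A minimal sketch of the two software mitigations described above, coarsening readings and capping their update rate before apps see them, might look like the following. The class and parameter names are hypothetical and do not correspond to any real operating-system API.

```python
import time

class ThrottledLightSensor:
    """Wrap a raw ambient-light readout with quantization and rate limiting."""

    def __init__(self, raw_read, step_lux=50.0, min_interval_s=1.0):
        self._raw_read = raw_read            # callable returning the raw lux value
        self._step = step_lux                # quantization step (precision limit)
        self._min_interval = min_interval_s  # minimum seconds between updates (rate limit)
        self._last_time = 0.0
        self._last_value = 0.0

    def read(self):
        now = time.monotonic()
        if now - self._last_time >= self._min_interval:
            raw = self._raw_read()
            # Snap to the nearest step so fine-grained intensity changes
            # (the signal the attack relies on) are no longer visible to apps.
            self._last_value = round(raw / self._step) * self._step
            self._last_time = now
        return self._last_value
```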

Getting the picture

The inversion process was applied to three demonstrations using an Android tablet. In the first test, the researchers seated a mannequin in front of the device, while different hands made contact with the screen. A human hand pointed to the screen, and later, a cardboard cutout resembling an open-hand gesture touched the monitor, with the pixelated imprints gathered by the MIT team revealing the physical interactions with the screen.

A subsequent demo with human hands revealed that the way users slide, scroll, pinch, swipe, and rotate could be gradually captured by hackers through the same imaging method, although only at a speed of one frame every 3.3 minutes. With a faster ambient light sensor, malicious actors could potentially eavesdrop on user interactions with their devices in real time.

In a third demo, the group found that users are also at risk when watching videos like films and short clips. A human hand hovered in front of the sensor while scenes from Tom and Jerry played on screen, with a white board behind the user reflecting light to the device. The ambient light sensor captured the subtle intensity changes for each video frame with the resulting images exposing touch gestures.

While the vulnerabilities in ambient light sensors pose a threat, such a hack is still restricted. The attack is slow: at the current image-retrieval rate of one frame every 3.3 minutes, a capture takes far longer than most user interactions last. Additionally, images recovered during natural video playback are still somewhat blurry, leaving room for future research. And while telescreens can capture objects away from the screen, this imaging privacy issue is confirmed only for objects that make contact with a mobile device’s screen, much as selfie cameras cannot capture objects out of frame.

More information:
Yang Liu et al, Imaging privacy threats from an ambient light sensor, Science Advances (2024). DOI: 10.1126/sciadv.adj3608

Citation:
Smart devices’ ambient light sensors pose imaging privacy risk (2024, January 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-01-smart-devices-ambient-sensors-pose.html


Amazon shifting to recycled paper filling for packages in North America

The Amazon logo is seen, June 15, 2023, at the Vivatech show in Paris. Amazon is moving from putting plastic air pillows in its packages to using recycled paper filling instead, a move that’s more environmentally friendly and secures items in boxes better. The company said Thursday, June 20, 2024 that it’s already replaced 95% of the plastic air fillers with paper filler in North America and is working toward complete removal by year’s end. Credit: AP Photo/Michel Euler, File

Amazon is shifting from the plastic air pillows used for packaging in North America to recycled paper because it’s more environmentally sound, and it says paper just works better.

The company said Thursday that it’s already replaced 95% of the plastic air pillows with paper filler in North America and is working toward complete removal by year’s end.

“We want to ensure that customers receive their items undamaged, while using as little packaging as possible to avoid waste, and prioritizing recyclable materials,” Amazon said.

It is the company’s largest plastic packaging reduction effort in North America to date and will remove almost 15 billion plastic air pillows from use annually.

Almost all customer deliveries for Prime Day this year, which happens next month, will contain no plastic air pillows, according to Amazon.

The e-commerce giant has faced years of criticism about its use of plastic from environmental groups, including a nonprofit called Oceana, which has been releasing its own reports on Amazon’s use of plastic packaging.

Matt Littlejohn, senior vice president of strategic initiatives at Oceana, said that Amazon’s effort to reduce plastic packaging is welcome news, but that there’s still more the company can do.

“While this is a significant step forward for the company, Amazon needs to build on this momentum and fulfill its multiyear commitment to transition its North America fulfillment centers away from plastic,” Littlejohn said in a prepared statement. “Then, the company should expand these efforts and also push innovations like reusable packaging to move away from single-use packaging everywhere it sells and ships.”

There has also been broad support among Amazon investors, who have urged the company to outline how it will reduce waste.

The company disclosed the total of single-use plastic across global operations for the first time in 2022 after investors sought more details on plans to reduce waste. The company said that it used 85,916 metric tons of single-use plastic that year, an 11.6% decrease from 2021.

Amazon began transitioning away from plastic air pillows in October at an automated fulfillment center in Ohio. The company said that it was able to test and learn at that center, which helped it move quickly on the transition to recycled paper filling.

The transition process included changing out machinery and training employees on new systems and machines.

Amazon discovered through testing that the paper filler, which is made from 100% recyclable content and is curbside recyclable, offers the same, if not better, protection during shipping compared with plastic air pillows, the company said.

Christian Garcia, who works at Amazon’s fulfillment center in Bakersfield, California, said in a release that the paper filler is easier to work with and that the machinery gives staff more space so that it’s easier to pack orders.

Ongoing efforts to reduce waste include a campaign to ship items without any additional packaging, the company said. In 2022, 11% of all of Amazon’s packages shipped worldwide were without added delivery packaging.

Other efforts include piloting new technology with artificial intelligence and robotics company Glacier to use AI-powered robots to automate the sorting of recyclables and collect real-time data on recycling streams for companies. It’s also partnering with the U.S. Department of Energy on new materials and recycling programs.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
So long plastic air pillows: Amazon shifting to recycled paper filling for packages in North America (2024, June 20)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-plastic-air-pillows-amazon-shifting.html
