
New security loophole allows spying on internet users visiting websites and watching videos

The “SnailLoad” loophole is based on combining the latency of internet connections with the fingerprinting of online content. Credit: IAIK – TU Graz

Internet users leave many traces on websites and online services. Measures such as firewalls, VPN connections and browser privacy modes are in place to ensure a certain level of data protection. However, a newly discovered security loophole allows bypassing all of these protective measures.

Computer scientists from the Institute of Applied Information Processing and Communication Technology (IAIK) at Graz University of Technology (TU Graz) were able to track users’ online activities in detail simply by monitoring fluctuations in the speed of their internet connection. No malicious code is required to exploit this vulnerability, known as “SnailLoad,” and the data traffic does not need to be intercepted. All types of end devices and internet connections are affected.

The researchers have published their work in a paper titled “SnailLoad: Exploiting Remote Network Latency Measurements without JavaScript.”

Attackers track latency fluctuations in the internet connection via file transfer

Attackers only need to have had direct contact with the victim on a single occasion beforehand. On that occasion, the victim downloads a basically harmless, small file from the attacker’s server without realizing it—for example, while visiting a website or watching an advertising video.

As this file does not contain any malicious code, it cannot be recognized by security software. The transfer of this file is extremely slow, providing the attacker with continuous information about the latency variation of the victim’s internet connection. In further steps, this information is used to reconstruct the victim’s online activity.

‘SnailLoad’ combines latency data with fingerprinting of online content

“When the victim accesses a website, watches an online video or speaks to someone via video, the latency of the internet connection fluctuates in a specific pattern that depends on the particular content being used,” says Stefan Gast from the IAIK. This is because all online content has a unique fingerprint: for efficient transmission, online content is divided into small data packets that are sent one after the other from the host server to the user. The pattern of the number and size of these data packets is unique for each piece of online content—like a human fingerprint.
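To make the idea concrete, such a fingerprint can be represented as a simple time series of bytes transferred per interval. The following Python sketch is purely illustrative and assumes a hypothetical packet trace of (timestamp, size) pairs; the function name and the 100 ms bucket width are choices made for this example, not details from the paper.

```python
from typing import List, Tuple

def build_fingerprint(packets: List[Tuple[float, int]],
                      interval: float = 0.1) -> List[int]:
    """Bucket a packet trace into bytes-per-interval counts.

    packets:  (timestamp_seconds, size_bytes) pairs for one piece of content.
    interval: bucket width in seconds (0.1 s is an arbitrary choice here).
    Returns a list where element i is the number of bytes sent in bucket i.
    """
    if not packets:
        return []
    start = packets[0][0]
    end = packets[-1][0]
    n_buckets = int((end - start) / interval) + 1
    buckets = [0] * n_buckets
    for ts, size in packets:
        buckets[int((ts - start) / interval)] += size
    return buckets

# Example: a short burst followed by a pause, then another burst.
trace = [(0.00, 1500), (0.02, 1500), (0.05, 1500), (0.40, 1500), (0.43, 1500)]
print(build_fingerprint(trace))  # [4500, 0, 0, 0, 3000]
```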

The researchers collected the fingerprints of a limited number of YouTube videos and popular websites in advance for testing purposes. When the test subjects used these videos and websites, the researchers were able to recognize this through the corresponding latency fluctuations.

“However, the attack would also work the other way round,” says Daniel Gruss from the IAIK. “Attackers first measure the pattern of latency fluctuations when a victim is online and then search for online content with the matching fingerprint.”
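Matching a measured trace against such a library can then be framed as a similarity search. The sketch below is a minimal stand-in for that step, using a plain normalized dot product over aligned samples; the researchers report using machine-learning models rather than this simple matcher, and the names here are assumptions for illustration.

```python
import numpy as np

def normalize(trace) -> np.ndarray:
    """Zero-mean, unit-variance version of a latency or fingerprint trace."""
    x = np.asarray(trace, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-9)

def best_match(latency_trace, fingerprints: dict) -> str:
    """Return the stored fingerprint most similar to the measured trace.

    latency_trace: latency samples measured via the slow download.
    fingerprints:  {content_name: reference_trace}, same sampling interval.
    """
    probe = normalize(latency_trace)
    scores = {}
    for name, ref in fingerprints.items():
        ref = normalize(ref)
        n = min(len(probe), len(ref))
        # Similarity over the overlapping portion; a real matcher would also
        # search over time offsets and bridge the latency-vs-packet-size gap.
        scores[name] = float(np.dot(probe[:n], ref[:n])) / n
    return max(scores, key=scores.get)
```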

Slow internet connections make it easier for attackers

When spying on test subjects who were watching videos, the researchers achieved a success rate of up to 98%.

“The higher the data volume of the videos and the slower the victims’ internet connection, the better the success rate,” says Gruss. Consequently, the success rate for spying on basic websites dropped to around 63%.

“However, if attackers feed their machine learning models with more data than we did in our test, these values will certainly increase,” says Gruss.

Loophole virtually impossible to close

“Closing this security gap is difficult. The only option would be for providers to artificially slow down their customers’ internet connections in a randomized pattern,” says Gruss. However, this would lead to noticeable delays for time-critical applications such as video conferences, live streams or online computer games.

The team led by Gast and Gruss has set up a website describing SnailLoad in detail. They will present the scientific paper on the loophole at the Black Hat USA 2024 conference and the USENIX Security Symposium.

More information:
Stefan Gast et al, SnailLoad: Exploiting Remote Network Latency Measurements without JavaScript (2024)

Citation:
New security loophole allows spying on internet users visiting websites and watching videos (2024, June 24)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-loophole-spying-internet-users-websites.html


Automated system teaches users when to collaborate with an AI assistant

The proposed onboarding approach with the IntegrAI algorithm. Credit: arXiv (2023). DOI: 10.48550/arxiv.2311.01007

Artificial intelligence models that pick out patterns in images can often do so better than human eyes—but not always. If a radiologist is using an AI model to help her determine whether a patient’s X-rays show signs of pneumonia, when should she trust the model’s advice and when should she ignore it?

A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.

In this case, the training method might find situations where the radiologist trusts the model’s advice—except she shouldn’t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.

During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI’s performance.

The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that just telling the user when to trust the AI, without training, led to worse performance.

Importantly, the researchers’ system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.

“So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use—there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.

The researchers envision that such onboarding will be a crucial part of training for medical professionals.

“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper is available on the arXiv preprint server and will be presented at the Conference on Neural Information Processing Systems.

Training that evolves

Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.

“The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.

To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.

The system’s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.

The system embeds these data points onto a latent space, which is a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI’s prediction but the prediction was wrong, and vice versa.

Perhaps the human mistakenly trusts the AI when images show a highway at night.
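As a rough, hypothetical illustration of that region-discovery step (the paper's actual IntegrAI algorithm is more sophisticated), one could cluster the latent-space embeddings of the examples where the collaboration decision went wrong; the function and variable names below are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_error_regions(embeddings: np.ndarray,
                       human_followed_ai: np.ndarray,
                       ai_correct: np.ndarray,
                       n_regions: int = 5) -> np.ndarray:
    """Cluster examples where the collaboration decision was wrong.

    embeddings:        (n_examples, d) latent representations of task inputs.
    human_followed_ai: boolean array, True if the human went with the AI.
    ai_correct:        boolean array, True if the AI's prediction was right.
    Returns cluster centers of the mis-collaboration examples, which a later
    step would turn into natural-language rules.
    """
    # A collaboration error: trusted the AI when it was wrong, or ignored it
    # when it was right.
    errors = human_followed_ai != ai_correct
    kmeans = KMeans(n_clusters=n_regions, n_init=10, random_state=0)
    kmeans.fit(embeddings[errors])
    return kmeans.cluster_centers_
```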

After discovering the regions, a second algorithm utilizes a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as “ignore AI when it is a highway during the night.”

These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI’s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI’s prediction.

If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.
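The exercise loop itself is simple to express in code. This is a hypothetical sketch of the flow just described, with display and input handled by caller-supplied functions rather than the study's actual interface.

```python
def run_onboarding(regions, get_user_answer, show_feedback):
    """Walk a user through one exercise per region, then repeat the misses.

    regions: list of dicts with an 'example', the 'ai_prediction', the ground
             'truth', and per-region performance 'stats' for human and AI.
    get_user_answer(example, ai_prediction) -> the user's label, or the
             string "use_ai" to defer to the AI's prediction.
    show_feedback(region, answer) -> displays the correct answer and the
             human/AI performance statistics for that region.
    """
    missed = []
    for region in regions:
        answer = get_user_answer(region["example"], region["ai_prediction"])
        final = region["ai_prediction"] if answer == "use_ai" else answer
        if final != region["truth"]:
            show_feedback(region, final)
            missed.append(region)
    # At the end of training, repeat the exercises the user got wrong.
    for region in missed:
        get_user_answer(region["example"], region["ai_prediction"])
```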

“After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.

Onboarding boosts accuracy

The researchers tested this system with users on two tasks—detecting traffic lights in blurry images and answering multiple choice questions from many domains (such as biology, philosophy, computer science, etc.).

They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: Some were only shown the card, some went through the researchers’ onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers’ onboarding procedure and were given recommendations of when they should or should not trust the AI, and others were only given the recommendations.

Only the researchers’ onboarding procedure without recommendations improved users’ accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that conveyed whether it should be trusted.

But providing recommendations without onboarding had the opposite effect—users not only performed worse, they took more time to make predictions.

“When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.

Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren’t enough data, the onboarding stage won’t be as effective, he says.

In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.

More information:
Hussein Mozannar et al, Effective Human-AI Teams via Learned Natural Language Rules and Onboarding, arXiv (2023). DOI: 10.48550/arxiv.2311.01007

Journal information:
arXiv


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Automated system teaches users when to collaborate with an AI assistant (2023, December 7)
retrieved 25 June 2024
from https://techxplore.com/news/2023-12-automated-users-collaborate-ai.html


Nuclear power has merits, but investing in renewables ensures long-term energy security, renewable energy expert says

Credit: Pixabay/CC0 Public Domain

Opposition Leader Peter Dutton has announced that he will go to the next election promising to build seven nuclear power stations. Dutton has promised the plants can be operational between 2035 and 2037 and will be built on the sites of retired or retiring coal-fired power stations.

However, Swinburne renewable energy expert Associate Professor Mehdi Seyedmahmoudian says that while the plan could have some merit, our energy system can already be transitioned without relying on nuclear power.

“Peter Dutton’s approach could potentially reduce costs and accelerate deployment by utilizing existing infrastructure. However, advancements in renewable energy sources such as solar, wind, and hydropower, combined with energy storage technologies, offer more sustainable and efficient alternatives,” said Associate Professor Seyedmahmoudian.

“Innovations in battery storage, hydrogen fuel cells, and hybrid energy systems are significantly enhancing the reliability and cost-effectiveness of renewable energy.”

Nuclear energy typically emits very little carbon dioxide, with Dutton’s promise linked to Australia’s goal of achieving net zero carbon emissions by 2050. However, Associate Professor Seyedmahmoudian says renewables are a much safer option.

“Smart grid technologies, community microgrids, and demand response management systems optimize energy distribution and consumption, facilitating the seamless integration of intermittent renewable sources. This reduces the need for large-scale, centralized power plants and addresses environmental and safety concerns associated with nuclear energy, such as radioactive waste management and potential catastrophic failures.”

The price tag for the opposition’s nuclear promise is unknown, with Dutton confirming “comprehensive site studies” would be needed before a cost could be revealed.

Associate Professor Seyedmahmoudian says renewable energy is the answer for a cheaper and more secure energy future.

“By investing in research, innovation, and infrastructure for renewable energy and smart grid technologies, we can achieve a reliable, sustainable, and cost-effective energy transition without relying on nuclear power, ensuring long-term energy security and environmental sustainability.”

Citation:
Nuclear power has merits, but investing in renewables ensures long-term energy security, renewable energy expert says (2024, June 19)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-nuclear-power-merits-investing-renewable.html


Drone racing prepares neural-network AI for space

A drone takes off inside TUDelft’s Cyber Zoo, its path shown by composite pictures taken by high speed cameras. Credit: ESA/TU Delft

Drones are being raced against the clock at Delft University of Technology’s “Cyber Zoo” to test the performance of neural-network-based AI control systems planned for next-generation space missions.

The research—undertaken by ESA’s Advanced Concepts Team together with the Micro Air Vehicle Laboratory, MAVLab, of TUDelft—is detailed in the latest issue of Science Robotics.

“Through a long-term collaboration, we’ve been looking into the use of trainable neural networks for the autonomous oversight of all kinds of demanding spacecraft maneuvers, such as interplanetary transfers, surface landings and dockings,” notes Dario Izzo, scientific coordinator of ESA’s ACT.

“In space every onboard resource must be utilized as efficiently as possible—including propellant, available energy, computing resources, and often time. Such a neural network approach could enable optimal onboard operations, boosting mission autonomy and robustness. But we needed a way to test it in the real world, ahead of planning actual space missions.

“That’s when we settled on drone racing as the ideal gym environment to test end-to-end neural architectures on real robotic platforms, to increase confidence in their future use in space.”

Drones have been competing to achieve the best time through a set course within the Cyber Zoo at TU Delft, a 10×10 m test area maintained by the University’s Faculty of Aerospace Engineering, ESA’s partner in this research. Human-steered “Micro Air Vehicle” quadcopters were alternated with autonomous counterparts with neural networks trained in various ways.

Credit: European Space Agency

“The traditional way that spacecraft maneuvers work is that they are planned in detail on the ground then uploaded to the spacecraft to be carried out,” explains ACT Young Graduate Trainee Sebastien Origer. “Essentially, when it comes to mission Guidance and Control, the Guidance part occurs on the ground, while the Control part is undertaken by the spacecraft.”

The space environment is inherently unpredictable, however, with the potential for all kinds of unforeseen factors and noise, such as gravitational variations, atmospheric turbulence or planetary bodies that turn out to be shaped differently than on-ground modeling predicted.

Whenever the spacecraft deviates from its planned path for whatever reason, its control system works to return it to the set profile. The problem is that such an approach can be quite costly in resource terms, requiring a whole set of brute force corrections.

Sebastien adds, “Our alternative approach, end-to-end Guidance and Control Networks (G&CNets), involves all the work taking place on the spacecraft. Instead of sticking to a single set course, the spacecraft continuously replans its optimal trajectory, starting from the current position it finds itself at, which proves to be much more efficient.”
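In control terms, this amounts to closing the loop around the network itself: at every step the current state is fed straight into the network and its output goes to the actuators. The loop below is a hypothetical sketch of that idea; the helper functions and the 100 Hz control interval are assumptions, not ESA's flight code.

```python
import time

def fly(policy, get_state, send_commands, dt=0.01, steps=2000):
    """Closed-loop use of a G&CNet-style policy.

    At each step the network maps the *current* state directly to actuator
    commands, so the vehicle effectively replans from wherever it actually is
    instead of chasing a pre-computed reference trajectory.
    """
    for _ in range(steps):
        state = get_state()          # e.g. position, velocity, attitude
        commands = policy(state)     # thruster or propeller commands
        send_commands(commands)
        time.sleep(dt)               # fixed control interval (assumed here)
```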

Drones are being raced against the clock at Delft University of Technology’s “Cyber Zoo” to test the performance of neural-network-based AI control systems planned for next-generation space missions. Credit: ESA/TU Delft

In computer simulations, neural nets composed of interlinked neurons—mimicking the setup of animal brains—performed well when trained using “behavioral cloning,” based on prolonged exposure to expert examples. But then came the question of how to build trust in this approach in the real world. At this point, the researchers turned to drones.
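Behavioral cloning in this setting is ordinary supervised learning: the network is regressed onto (state, optimal command) pairs produced by an expert trajectory solver. The PyTorch sketch below illustrates the idea; the layer widths and the 10-dimensional state with 4 propeller commands are assumptions for the example, not the published architecture.

```python
import torch
import torch.nn as nn

# Tiny fully connected policy: state in, actuator commands out.
# Dimensions are placeholders (e.g. 10-D drone state, 4 propeller commands).
policy = nn.Sequential(
    nn.Linear(10, 120), nn.Softplus(),
    nn.Linear(120, 120), nn.Softplus(),
    nn.Linear(120, 4),
)

def behavioral_cloning(policy, states, expert_commands, epochs=100, lr=1e-3):
    """Fit the policy to expert (state, command) pairs by simple regression."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), expert_commands)
        loss.backward()
        opt.step()
    return policy
```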

“There’s quite a lot of synergies between drones and spacecraft, although the dynamics involved in flying drones are much faster and noisier,” comments Dario.

“When it comes to racing, obviously the main scarce resource is time, but we can use that as a substitute for other variables that a space mission might have to prioritize, such as propellant mass.

“Satellite CPUs are quite constrained, but our G&CNets are surprisingly modest, perhaps storing up to 30,000 parameters in memory, which can be done using only a few hundred kilobytes, involving fewer than 360 neurons in all.”
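A quick back-of-the-envelope check shows why such a network is so cheap to store: with assumed layer widths like those in the sketch above, the whole policy fits comfortably within the memory budget Izzo describes.

```python
# Parameter and memory count for a small fully connected policy with
# assumed (illustrative) layer widths of 10 -> 120 -> 120 -> 4.
layers = [(10, 120), (120, 120), (120, 4)]
params = sum(i * o + o for i, o in layers)      # weights + biases
neurons = sum(o for _, o in layers)
kilobytes = params * 4 / 1024                   # float32 storage
print(params, neurons, round(kilobytes))        # 16324 parameters, 244 neurons, ~64 KB
```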

  • Human-steered “Micro Air Vehicle” quadcopters were alternated with autonomous, neural-network-controlled counterparts racing through a set course in the Cyber Zoo at TU Delft. Credit: European Space Agency
  • Optimality principles determine the decision-making during different phases of exploration missions. Credit: Science Robotics (2024). DOI: 10.1126/scirobotics.adi6421

In order to be optimal, the G&CNet should be able to send commands directly to the actuators. For a spacecraft, these are the thrusters and, in the case of drones, their propellers.

“The main challenge that we tackled for bringing G&CNets to drones is the reality gap between the actuators in simulation and in reality,” says Christophe De Wagter, principal investigator at TU Delft.

“We deal with this by identifying the reality gap while flying and teaching the neural network to deal with it. For example, if the propellers give less thrust than expected, the drone can notice this via its accelerometers. The neural network will then regenerate the commands to follow the new optimal path.”
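A minimal version of that adaptation idea is to track an effective thrust gain online from the accelerometer and let the controller use it. The snippet below is a hypothetical sketch of such an online estimator, not TU Delft's implementation.

```python
def update_thrust_gain(gain, commanded_accel, measured_accel, alpha=0.05):
    """Exponentially track the ratio of measured to commanded acceleration.

    gain ~ 1.0 means the propellers deliver the thrust the model expects;
    gain < 1.0 means less thrust than expected, so the controller (or the
    network's input) should be scaled accordingly.
    """
    if abs(commanded_accel) < 1e-6:
        return gain                  # avoid dividing by near-zero commands
    observed = measured_accel / commanded_accel
    return (1 - alpha) * gain + alpha * observed
```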

“There’s a whole academic community of drone racing, and it all comes down to winning races,” says Sebastien. “For our G&CNets approach, the use of drones represents a way to build trust, develop a solid theoretical framework and establish safety bounds, ahead of planning an actual space mission demonstrator.”

More information:
Dario Izzo et al, Optimality principles in spacecraft neural guidance and control, Science Robotics (2024). DOI: 10.1126/scirobotics.adi6421

Citation:
Drone racing prepares neural-network AI for space (2024, June 20)
retrieved 25 June 2024
from https://phys.org/news/2024-06-drone-neural-network-ai-space.html


Researchers ‘crack the code’ for quelling electromagnetic interference

Equipped with a breakthrough algorithmic solution, researchers have “cracked the code” on interference when machines need to talk with each other—and people. Credit: Alex Dolce, Florida Atlantic University

Researchers at the Florida Atlantic Center for Connected Autonomy and Artificial Intelligence (CA-AI.fau.edu) have “cracked the code” on interference when machines need to talk with each other—and people.

Electromagnetic waves make wireless connectivity possible but create a lot of unwanted chatter. Referred to as “electromagnetic interference,” this noisy byproduct of wireless communications poses formidable challenges in modern day dense Internet of Things and AI robotic environments. With the demand for lightning-fast data rates reaching unprecedented levels, the need to quell this interference is more pressing than ever.

Equipped with a breakthrough algorithmic solution, researchers from the FAU Center for Connected Autonomy and AI, within the College of Engineering and Computer Science, and the FAU Institute for Sensing and Embedded Network Systems Engineering (I-SENSE) have figured out a way to do that.

Their method, which is a first, dynamically fine-tunes multiple-input multiple-output (MIMO) links, a cornerstone of modern-day wireless systems such as Wi-Fi and cellular networks.

The researchers’ approach, published in a special issue of the IEEE Journal on Selected Areas in Communications and featured as a research highlight in Nature Reviews, demonstrates how their algorithmic method sculpts wireless waveforms to navigate the crowded frequency band. By simultaneously optimizing transmission in space and time, this algorithm could pave the way for pristine communication channels.

In field demonstrations, the researchers dynamically optimized MIMO wireless waveform shapes over a given frequency band to manage and avoid interference in machine-to-machine communications and showed the effectiveness of this method in real-world scenarios where interference is a common problem.
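The team's joint space-time optimization itself is described in the paper, but the spatial half of the idea can be illustrated with a classical baseline: choosing antenna weights that favor the intended link while de-emphasizing directions dominated by interference (an MVDR-style beamformer). The NumPy sketch below is that textbook baseline under assumed names, not the FAU algorithm.

```python
import numpy as np

def interference_aware_weights(h_desired: np.ndarray,
                               interference_cov: np.ndarray) -> np.ndarray:
    """MVDR-style antenna weights for one MIMO link.

    h_desired:        channel vector of the intended link, shape (n_antennas,).
    interference_cov: interference-plus-noise covariance, (n_antennas, n_antennas).
    Returns unit-norm weights that favor the desired channel while
    de-emphasizing directions dominated by interference.
    """
    w = np.linalg.solve(interference_cov, h_desired)
    return w / np.linalg.norm(w)

# Toy example: 4 antennas, one strong interferer plus white noise.
rng = np.random.default_rng(0)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # desired channel
g = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # interferer channel
R = np.outer(g, g.conj()) * 10.0 + np.eye(4)               # interference + noise
w = interference_aware_weights(h, R)
print(abs(w.conj() @ h) ** 2, abs(w.conj() @ g) ** 2)      # desired vs leaked power
```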

“We have pioneered the conceptual and practical groundwork for machines outfitted with multiple antennas to autonomously determine the most effective waveform shapes in both time and space domains for communication within a designated frequency band, even among extremely challenging interference and disturbances,” said Dimitris Pados, Ph.D., senior author, professor, director of the CA-AI and a fellow of I-SENSE in the Department of Electrical Engineering and Computer Science.

“By employing dynamic waveform machine learning in tandem across space and time, we believe that we have ‘cracked the code’ on mitigating electromagnetic interference.”

Researchers first conducted extensive simulations to validate the efficacy of this method against a barrage of interference scenarios from near-field to far-field and in both light and dense interference scenarios. These simulations highlighted the ability of the optimized waveforms, particularly joint space-time optimization, to maintain “clean” communications in extreme mixed-interference environments.

“In the realm of autonomous systems and machine-to-machine communications, secure, reliable and ‘clean’ communications are paramount, underscoring the importance of this breakthrough research at Florida Atlantic,” said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science.

“In the midst of chaos in modern communication, this innovative research offers a very promising avenue to address interference challenges in machine-to-machine communications where there are high volumes of devices and multiple networks.”

More information:
Sanaz Naderi et al, Self-Optimizing Near and Far-Field MIMO Transmit Waveforms, IEEE Journal on Selected Areas in Communications (2024). DOI: 10.1109/JSAC.2024.3389123

Citation:
Researchers ‘crack the code’ for quelling electromagnetic interference (2024, June 21)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-code-quelling-electromagnetic.html
