
AI can help forecast toxic ‘blue-green tides’

HABs are a common occurrence in western Lake Erie. Credit: U.S. Geological Survey

A team of Los Alamos National Laboratory scientists plans to use artificial intelligence modeling to forecast, and better understand, a growing threat to water quality posed by toxic algal blooms. Fueled by climate change and rising water temperatures, these harmful algal blooms, or HABs, have grown in intensity and frequency. They have now been reported in all 50 U.S. states.

“Harmful algal blooms are appearing in areas where, historically, they were never present,” said Babetta Marrone, senior scientist at the Lab and the project’s team lead. “The ecosystem of organisms that cause these blooms is very complex. And the information we do have about when, and why, these blooms form is dispersed through a variety of local, state, federal, and international databases. This is one area where we believe AI can help.”

Each year, so-called “red tides” and “blue-green tides” close beaches and lakes, kill untold numbers of aquatic animals, and cause billions of dollars in economic damage. To predict and mitigate these outbreaks, scientists need modern tools that can reliably capture the physical, chemical, and biological processes that dictate HAB toxicity and prevalence. The Los Alamos team has detailed a process through which artificial intelligence models can help unravel these mysteries.

Understanding the HAB ecosystem

Researchers have collected data on HABs since 1954. For decades, scientists have understood that elevated water temperatures, combined with sudden infusions of nutrients (often phosphorus and nitrogen runoff from industrial farming), tend to precede a HAB event. This sudden imbalance of nutrients can lead to the explosive growth of cyanobacteria, which occur naturally in freshwater.

Under these conditions, cyanobacterial species such as Microcystis aeruginosa can form dense blankets on the water surface, eventually releasing microcystin, a toxin that can sicken or kill organisms including fish, wildlife, and humans.

Abstract. Credit: ACS ES&T Water (2023). DOI: 10.1021/acsestwater.3c00369

But what causes toxic cyanobacteria to prevail in these freshwater ecosystems has proved challenging to understand. Cyanobacterial HABs are complex ecosystems influenced by hundreds—sometimes thousands—of other microorganisms.

“Large genomic datasets of cyanobacterial HABs are becoming more available,” Marrone said. “Our team plans to mine these datasets with machine learning and artificial intelligence models to understand the relationship between cyanobacteria and the many other microorganisms present in the water body over the course of the algal blooms. This will let us identify the key functional relationships that cause toxin production.”

A path toward forecasting

Another major impediment to understanding, and thus forecasting, algal blooms is the data itself. Existing data has been collected independently by a variety of organizations across the nation and the world, some of it by citizen scientist groups. Much of this data was gathered with varying instruments, then logged in different formats.

In their recent publication, Marrone’s team outlines how AI and machine learning models can decipher and analyze this disparate data. Doing so would allow scientists to better understand the conditions that create HABs, the first step in forecasting these outbreaks.

“Our goal is to feed existing information into a model that takes advantage of data gleaned from water sampling, weather telemetry stations, satellite sensing data, and the newly emerging biological data,” Marrone said. “Such a model could then be used to forecast algal blooms, and possibly even predict how climate change will alter their intensity and frequency in the future.”
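
The paper itself does not include code, but a minimal sketch can illustrate the kind of data integration Marrone describes. The tables, column names, and use of pandas and scikit-learn below are assumptions made for illustration, not the team’s actual pipeline.

```python
# Minimal illustrative sketch (not the Los Alamos team's model): combine
# hypothetical water-sampling, weather and satellite observations keyed by
# site and date, then fit a simple classifier that flags bloom-prone conditions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Tiny made-up observations; a real pipeline would pull these from the
# dispersed monitoring databases the article mentions.
water = pd.DataFrame({
    "site_id": [1, 1, 2, 2], "date": ["2023-07-01", "2023-08-01"] * 2,
    "phosphorus_mg_l": [0.02, 0.30, 0.01, 0.25],
    "nitrogen_mg_l": [0.5, 2.1, 0.4, 1.8],
    "bloom_observed": [0, 1, 0, 1],   # label: did a HAB follow these conditions?
})
weather = pd.DataFrame({
    "site_id": [1, 1, 2, 2], "date": ["2023-07-01", "2023-08-01"] * 2,
    "water_temp_c": [19.0, 27.5, 18.5, 26.0],
})
satellite = pd.DataFrame({
    "site_id": [1, 1, 2, 2], "date": ["2023-07-01", "2023-08-01"] * 2,
    "chlorophyll_index": [0.1, 0.8, 0.1, 0.7],
})

features = water.merge(weather, on=["site_id", "date"]).merge(satellite, on=["site_id", "date"])
X = features.drop(columns=["site_id", "date", "bloom_observed"])
y = features["bloom_observed"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
features["bloom_risk"] = model.predict_proba(X)[:, 1]  # forecast-style score per site/date
print(features[["site_id", "date", "bloom_risk"]])
```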

The research has been published in the journal ACS ES&T Water.

More information:
Babetta L. Marrone et al, Toward a Predictive Understanding of Cyanobacterial Harmful Algal Blooms through AI Integration of Physical, Chemical, and Biological Data, ACS ES&T Water (2023). DOI: 10.1021/acsestwater.3c00369

Citation:
AI can help forecast toxic ‘blue-green tides’ (2024, June 20)
retrieved 29 June 2024
from https://phys.org/news/2024-06-ai-toxic-blue-green-tides.html


Amazon is reviewing whether Perplexity AI improperly scraped online content

People are reflected in a window of a hotel at the Davos Promenade in Davos, Switzerland, Jan. 15, 2024. The artificial intelligence startup Perplexity AI has raised tens of millions of dollars from the likes of Jeff Bezos and other prominent tech investors for its mission to rival Google in the business of searching for information. Credit: AP Photo/Markus Schreiber, File

Amazon is reviewing claims that the artificial intelligence startup Perplexity AI is scraping content—including from prominent news sites—without approval.

Amazon spokesperson Samantha Mayowa confirmed Friday that the tech giant was assessing information it received from the news outlet WIRED, which published an investigation earlier this month saying Perplexity appeared to scrape content from websites that had blocked such access. Perplexity uses servers from Amazon Web Services (AWS).
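
The kind of prohibition at issue is commonly expressed through a site’s robots.txt file, a machine-readable list of what crawlers may and may not fetch. As a rough illustration only (hypothetical, and unrelated to Perplexity’s or Amazon’s actual systems), a well-behaved crawler can check those rules with Python’s standard library before fetching a page:

```python
# Minimal sketch of a polite crawler check using only the standard library.
# The user agent string and URLs are placeholders, not any company's real values.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt rules

user_agent = "ExampleBot"
page = "https://example.com/some-article"

if rp.can_fetch(user_agent, page):
    print("robots.txt permits fetching this page")
else:
    print("robots.txt disallows this page; a polite crawler should skip it")
```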

Amazon’s “terms of service prohibit abusive and illegal activities and our customers are responsible for complying with those terms,” Mayowa said in a prepared statement. “We routinely receive reports of alleged abuse from a variety of sources and engage our customers to understand those reports.”

Perplexity spokesperson Sara Platnick said Friday that the company had determined that Perplexity-controlled services are not crawling websites in any way that violates AWS terms of service.

The San Francisco-based AI search startup has been a darling of prominent tech investors, including heavy hitters such as Amazon founder Jeff Bezos. But in the past few weeks, the company has found itself in hot water amid accusations of plagiarism.

Perplexity CEO Aravind Srinivas has offered a robust defense of the startup after it published a summarized news story that drew information, and similar wording, from a Forbes investigative story without citing the outlet or asking for its permission. Forbes later said it found similar “knock-off” stories lifted from other publications.

Separately, The Associated Press found another Perplexity product invented fake quotes from real people.

Srinivas said in an AP interview earlier this month that his company “never ripped off content from anybody. Our engine is not training on anyone else’s content,” in part because the company is simply aggregating what other companies’ AI systems generate.

But, he added, “It was accurately pointed out by Forbes that they preferred a more prominent highlighting of the source.” He said sources are now highlighted more prominently.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Amazon is reviewing whether Perplexity AI improperly scraped online content (2024, June 29)
retrieved 29 June 2024
from https://techxplore.com/news/2024-06-amazon-perplexity-ai-improperly-online.html


Scammers can abuse security flaws in email forwarding to impersonate high-profile domains

Researchers were able to spoof a wide range of email addresses. Credit: University of California San Diego

Sending an email with a forged address is easier than previously thought, due to flaws in the process that allows email forwarding, according to a research team led by computer scientists at the University of California San Diego.

The issues the researchers uncovered have a broad impact, affecting the integrity of email sent from tens of thousands of domains, including those representing organizations in the U.S. government, such as the majority of U.S. cabinet email domains (state.gov among them) and several security agencies. Key financial services companies, such as Mastercard, and major news organizations, such as The Washington Post and the Associated Press, are also vulnerable.

The technique is called forwarding-based spoofing, and researchers found that they could send email messages impersonating these organizations, bypassing the safeguards deployed by email providers such as Gmail and Outlook. Because the spoofed email appears to come from a trusted organization, recipients are more likely to open attachments that deploy malware, or to click on links that install spyware on their machines.

Such spoofing is made possible by a number of vulnerabilities centered on forwarding emails, the research team found. The original protocol used to check the authenticity of an email implicitly assumes that each organization operates its own mailing infrastructure, with specific IP addresses not used by other domains.

But today, many organizations outsource their email infrastructure to Gmail and Outlook. As a result, thousands of domains have delegated the right to send email on their behalf to the same third party. While these third-party providers validate that their users only send email on behalf of domains that they operate, this protection can be bypassed by email forwarding.

For example, state.gov, the email domain for the Department of State, allows Outlook to send emails on its behalf. This means emails claiming to be from state.gov are considered legitimate if they come from Outlook’s email servers.

As a result, an attacker can create a spoofed email (one with a forged sender identity) pretending, for example, to come from the Department of State, and then forward it through their personal Outlook account. Once they do this, the spoofed email is treated as legitimate by the recipient, because it arrives from an Outlook email server.
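
A toy model makes the trust problem concrete. This is not the researchers’ code, and real systems publish sender-authorization policies in DNS rather than in hard-coded tables; the sketch only shows why delegating to a shared provider, combined with forwarding, lets a forged sender pass.

```python
# Toy model of the delegation problem (illustrative only). Many unrelated
# domains authorize the same shared provider's servers, so any message relayed
# through that provider looks legitimate for every one of them.
AUTHORIZED_SENDERS = {
    "state.gov": {"outlook-servers"},
    "alice-corp.example": {"outlook-servers"},  # hypothetical customer of the same provider
}

def passes_sender_check(claimed_from_domain: str, sending_infrastructure: str) -> bool:
    """Simplified check: is the connecting infrastructure authorized for the claimed domain?"""
    return sending_infrastructure in AUTHORIZED_SENDERS.get(claimed_from_domain, set())

# An attacker forges a message claiming to be from state.gov, then relays it
# through their own personal account hosted on the shared provider.
print(passes_sender_check("state.gov", "outlook-servers"))  # True: accepted as legitimate
print(passes_sender_check("state.gov", "attacker-server"))  # False: rejected
```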

Versions of this flaw also exist for five other email providers, including iCloud. The researchers also discovered other, smaller issues that affect users of Gmail and Zoho Mail, a popular email provider in India.

The researchers reported the issue to Microsoft, Apple and Google but, to their knowledge, it has not been fully fixed.

“That is not surprising since doing so would require a major effort, including dismantling and repairing four decades worth of legacy systems,” said Alex Liu, the paper’s first author and a Ph.D. student in the Jacobs School Department of Computer Science and Engineering at UC San Diego. “While there are certain short-term mitigations that will significantly reduce the exposure to the attacks we have described here, ultimately email needs to stand on a more solid security footing if it is to effectively resist spoofing attacks going forward.”

The team presented their findings at the 8th IEEE European Symposium on Privacy and Security, July 3 to 7, 2023, in Delft, where the work won best paper.

Different attacks

Researchers developed four different types of attacks using forwarding.

For the first three, they assumed that an adversary controls both the account that sends the email and the account that forwards it. The attacker also needs a server capable of sending spoofed email messages and an account with a third-party provider that allows open forwarding.

The attacker starts by creating a personal account for forwarding and then adds the spoofed address to the account’s whitelist, a list of domains that won’t be blocked even if they don’t meet security standards. The attacker configures the account to forward all email to the desired target, forges an email to look like it originated from state.gov, and sends it to their personal Outlook account. All that remains is to forward the spoofed email to the target.

More than 12% of the Alexa top 100K email domains, the most popular domains on the internet, are vulnerable to this attack. These include a large number of news organizations, such as The Washington Post, the Los Angeles Times and the Associated Press, as well as domain registrars like GoDaddy, financial services companies such as Mastercard and DocuSign, and large law firms.

Example of a spoofed email attack exploiting open forwarding and relaxed validation policies for forwarded email from well-known providers. Credit: University of California San Diego

In addition, 32% of .gov domains are vulnerable, including the majority of U.S. cabinet agencies, a range of security agencies, and public health agencies such as the CDC. At the state and local level, virtually all primary state government domains are vulnerable, and more than 40% of all .gov domains are used by cities.

In a second version of this attack, an attacker creates a personal Outlook account to forward spoofed email messages to Gmail. In this scenario, the attacker takes on the identity of a domain that is also served by Outlook, then sends the spoofed message from their own malicious server to their personal Outlook account, which in turn forwards it to a series of Gmail accounts.

Roughly 1.9 billion users worldwide are vulnerable to this attack.

Researchers also found variations of this attack that work for four popular mailing list services: Google Groups, Mailman, LISTSERV and Gaggle.

Potential solutions

The researchers disclosed all vulnerabilities and attacks to the providers. Zoho patched its issue and awarded the team a bug bounty. Microsoft also awarded a bug bounty and confirmed the vulnerabilities. Mailing list service Gaggle said it would change protocols to resolve the issue. Gmail fixed the issues the team reported, and iCloud is investigating.

But to truly get to the root of the issue, researchers recommend disabling open forwarding, a process that allows users to configure their account to forward messages to any designated email address without any verification by the destination address. This process is in place for Gmail and Outlook. In addition, providers such as Gmail and Outlook implicitly trust high-profile email services, delivering messages forwarded by those services regardless.

Providers should also do away with the assumption that emails coming from another major provider are legitimate, a practice known as relaxed validation policies.

In addition, researchers recommend that mailing lists request confirmation from the true sender address before delivering email.

“A more fundamental approach would be to standardize various aspects of forwarding,” the researchers write. “However, making such changes would require system-wide cooperation and will likely encounter many operational issues.”

Methods

For each service, researchers created multiple test accounts and used them to forward email to recipient accounts they controlled. They then analyzed the resulting email headers to better understand which forwarding protocol the service used. They tested their attacks on 14 email providers, which are used by 46% of the most popular internet domains and government domains.
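
That header-analysis step can be illustrated with Python’s standard email module. The message below is hypothetical and the snippet is not the researchers’ tooling; it simply shows the forwarding-related headers (the Received hops and authentication results) such an analysis inspects.

```python
# Minimal sketch: inspect forwarding-related headers of a raw message.
# The sample message and all header values are hypothetical.
from email import message_from_string

raw = """\
Received: from forwarder.example.com (203.0.113.7) by mx.recipient.example
Received: from attacker.example.net (198.51.100.9) by forwarder.example.com
Authentication-Results: mx.recipient.example; spf=pass smtp.mailfrom=forwarder.example.com
From: press@state.gov
To: victim@recipient.example
Subject: test

body
"""

msg = message_from_string(raw)
print("Claimed sender:", msg["From"])
for hop in msg.get_all("Received", []):          # newest hop first
    print("Hop:", hop.split(" by ")[0].strip())
print("Auth results:", msg["Authentication-Results"])
```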

They also created mailing lists under existing services provided by UC San Diego, and by mailing list service Gaggle.

Researchers only sent spoofed email messages to accounts they created themselves. They first tested each attack by spoofing domains they created and controlled. Once they verified that the attacks worked, they ran a small set of experiments that spoofed emails from real domains. Still, the spoofed emails were only sent to test accounts the researchers created.

“One fundamental issue is that email security protocols are distributed, optional and independently configured components,” the researchers write. “This creates a large and complex attack surface with many possible interactions that cannot be easily anticipated or administrated by any single party.”

The research is published on the arXiv preprint server.

More information:
Enze Liu et al, Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy, arXiv (2023). DOI: 10.48550/arxiv.2302.07287

Journal information:
arXiv


Citation:
Scammers can abuse security flaws in email forwarding to impersonate high-profile domains (2023, September 5)
retrieved 29 June 2024
from https://techxplore.com/news/2023-09-scammers-abuse-flaws-email-forwarding.html


How can we make good decisions by observing others? A videogame and computational model have the answer

Groups of players moved through a virtual 3D environment to search for hidden treasures. Credit: SCIoI/Dominik Deffner

How can disaster response teams benefit from understanding how people most efficiently pick strawberries together, or how they choose the perfect ice cream shop with friends?

All these scenarios hinge on the fundamental question of how, and when, human groups manage to adapt collectively to different circumstances. Two recent studies on collective dynamics by the Cluster of Excellence Science of Intelligence (SCIoI) in Berlin, Germany, lay the groundwork for better-coordinated operations while showcasing the potential of the Cluster’s analytic-synthetic loop approach: an interconnection of a human-focused (analytic) study with a novel computer simulation (synthetic).

By understanding how individual decisions affect group performance, we may be able to enhance emergency services and everyday teamwork, and further develop effective decentralized robotic systems that could benefit society in multiple ways (think of robots that explore potentially dangerous places such as a crumbling building).

How groups of people move and make collective decisions (analytic side)

Through a naturalistic immersive-reality experiment, Science of Intelligence researchers have presented new findings on the dynamics of human collective behavior. The study “Collective incentives reduce over-exploitation of social information in unconstrained human groups,” published in Nature Communications, explores how individual decisions shape collective outcomes in realistic group settings.

In the experiment, groups of participants freely moved through a 3D virtual environment similar to a video game, searching for hidden treasures. This resembled scenarios of hunting and gathering, extinguishing wildfires, or searching for survivors together.

The researchers varied how resources were distributed and how participants were incentivized. Individuals often benefited from staying close to others and taking advantage of their discoveries. At the group level, however, this copying led to poor performance.

“It’s a bit like copying homework: You are benefitting yourself but not contributing to group performance in the long run,” said Dominik Deffner. “But it also turned out that rewards on the group level, similar to bonuses for team achievements, reduced this copying behavior and thereby improved group performance.”

To extract individual decisions from naturalistic societal interactions, the researchers developed a computational model helping them to understand key decision-making processes. This model inferred sequences of decisions from visual and movement data and showed that group rewards made people less likely to follow social information, encouraging them to become more selective over time.

100 virtual robots searching together for resources, being less cooperative on the left and more cooperative on the right. Credit: SCIoI/David Mezey

The study also looked at how groups moved and acted over time and space, finding a balance between exploring new areas and using known resources at different times. These findings are important for improving group strategies in many areas, like solving problems in businesses or improving search and rescue operations.

How visual perception and embodiment shape collective decisions (synthetic side)

In a complementary study, called “Visual social information use in collective foraging” and published in PLOS Computational Biology, researchers introduced a new computational model that explores how individual decisions shape collective behavior.

The model applies to any realistic situation where groups of people, animals, or robots are searching for rewards together. This computational model addresses two main questions: how do individuals make decisions according to visible information around them? And how do they move in a physical space at the same time?

In this study, a simulated swarm of robots searches for resources in a virtual playground very similar to the one used in Deffner’s study described above. The resources occur in patches, and when a patch is depleted, it reappears in a new spot. The virtual robots can choose between exploring the environment to find new resource patches, following other robots that are consuming resources, or staying and consuming resources until they’re gone.

The findings show how simple decisions, for example where to go next, can lead to complex group behavior.
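
A minimal sketch of one such decision, written for illustration rather than taken from the published model (the follow probability and the structure below are assumptions), shows the three options each simulated robot weighs.

```python
# Illustrative sketch of the explore / relocate (follow) / exploit choice;
# not the published model, and all parameters here are made up.
import random

def choose_action(on_resource: bool, visible_exploiting_neighbors: list,
                  follow_probability: float = 0.6) -> str:
    """Return one of 'exploit', 'relocate', or 'explore' for a single agent."""
    if on_resource:
        return "exploit"        # stay and consume until the patch is depleted
    if visible_exploiting_neighbors and random.random() < follow_probability:
        return "relocate"       # move toward a visibly successful neighbor
    return "explore"            # search the environment independently

# One decision step for a small group: agents 1 and 2 can see agent 0 exploiting.
states = [
    {"on_resource": True,  "neighbors": []},
    {"on_resource": False, "neighbors": [0]},
    {"on_resource": False, "neighbors": [0]},
]
for i, s in enumerate(states):
    print(f"agent {i}: {choose_action(s['on_resource'], s['neighbors'])}")
```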

“The environment plays an important role in how efficiently groups work together,” said David Mezey. “When resources are concentrated, working closely together and relying on shared information is the most efficient solution. However, when the resources are spread out, it’s better for individuals or smaller subgroups to work independently. This explains some everyday group behaviors that many of us may be familiar with.

“Imagine a group of firefighters tasked with putting out a large fire in the forest. If the flames are concentrated in one well-defined area, the best strategy would be for all of them to work together in that specific location. But, if the fire has already spread across patches, it is more effective for the firefighters to split into smaller subgroups to find and tackle the distributed patches independently.”

The study also highlights how physical and visual limitations affect group performance. The authors included real-world limitations in their computer simulations, for example, individuals bumping into each other when too close, or blocking each other’s views.

Theoretical framework. Credit: Mezey et al. (2024)

They discovered that these limitations can fundamentally change collective behavior and, interestingly, in some cases, even improve group performance. For example, virtual robots with restricted vision focus only on nearby individuals, improving their search strategy. Imagine strawberry picking with friends: even if a friend finds some fruits far away from you, you might want to stay in your area to avoid reaching an already empty patch.

These limitations had similar effects on virtual robots, and this study shows why it’s so important to think about such limitations when studying group behavior.

Analyzing, synthesizing and looping back again

Researchers have come to understand certain animal collective behaviors, especially in fish, birds and sheep, through simple interaction rules that are often based on physical principles. To understand collective behavior in humans, however, we need to understand all the individual decisions people make and the cognitive processes that produce them.

In the two studies, the researchers link individual cognition to collective outcomes in realistic environments and thus explain complex group outcomes based on individual decisions. In other words, insights from the human-focused study (analytic side) are used to create computational models (synthetic side) that can be applied to better understand phenomena such as collective behavior and social learning (loop).

This provides a fruitful path forward, hopefully making it possible to understand, predict, and guide collective outcomes in crucial areas.

Together, these studies offer a comprehensive understanding of the mechanisms linking individual cognition to collective outcomes in collective foraging tasks, providing new perspectives on optimizing collective performance across various fields. The implications for decentralized robotic systems are particularly promising.

Understanding realistic constraints on group performance might reshape how we develop efficient swarm robotic applications in the future.

More information:
Dominik Deffner et al, Collective incentives reduce over-exploitation of social information in unconstrained human groups, Nature Communications (2024). DOI: 10.1038/s41467-024-47010-3

David Mezey et al, Visual social information use in collective foraging, PLOS Computational Biology (2024). DOI: 10.1371/journal.pcbi.1012087

Provided by
Technische Universität Berlin – Science of Intelligence

Citation:
How can we make good decisions by observing others? A videogame and computational model have the answer (2024, June 4)
retrieved 29 June 2024
from https://phys.org/news/2024-06-good-decisions-videogame.html


Researchers issue warning over Chrome extensions that access private data


Google Chrome browser extensions expose users to hackers who can easily tap into their private data, including social security numbers, passwords and banking information, according to researchers at the University of Wisconsin-Madison (UW-M).

The researchers further uncovered vulnerabilities involving passwords that are stored in plain text within the HTML source code on the websites of some of the world’s largest organizations, including Google, Amazon, Citibank, Capital One and the Internal Revenue Service.

The problem stems from the manner in which extensions access internal web page code.

Google offers thousands of extensions that users install to handle calendar events, password management, ad blocking, email access, bookmark storage, translation and search activities.

While such extensions help expand upon browser capabilities and make browsing easier, they also expose stored data to intruders, said Asmit Nayak, a computer science graduate student at UW-M.

“In the absence of any protective measures, as seen on websites like IRS.gov, Capital One, USENIX, Google, and Amazon, sensitive data such as SSNs and credit card information are immediately accessible to all extensions running on the page,” Nayak said in a report published on the pre-print server arXiv on Aug. 30. “This presents a significant security risk, as private data is left vulnerable.”

The threat remains despite protective measures introduced by Google this year that have been embraced by most browsers. The protocol placed stricter limits on what kinds of information extensions can access.

But there remains no protective layer between web pages and browser extensions, so bad actors can still evade detection.

The researchers described “the alarming discovery” of passwords stored in plain text in the HTML source code of web pages.
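
To make concrete what “plain text in the HTML source” means, the following sketch (illustrative only, not the researchers’ measurement code, and with hypothetical field names and values) scans page markup for sensitive input fields whose values are written directly into the document; any extension with DOM access could read the same values at runtime.

```python
# Illustrative sketch: find input fields whose sensitive values sit directly
# in the page markup, using only the standard library.
from html.parser import HTMLParser

SENSITIVE_TYPES = {"password"}
SENSITIVE_NAMES = {"ssn", "credit-card", "cvv"}  # hypothetical field names

class SensitiveFieldFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        if a.get("type") in SENSITIVE_TYPES or a.get("name") in SENSITIVE_NAMES:
            if a.get("value"):  # the value is present in plain text in the markup
                print(f"exposed field {a.get('name')!r}: {a['value']!r}")

page_source = '<form><input type="password" name="pwd" value="hunter2"></form>'
SensitiveFieldFinder().feed(page_source)
```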

“A significant percentage of extensions possess the necessary permissions to exploit these vulnerabilities,” Nayak said, adding that he and his two colleagues identified 190 extensions “that directly access password fields.”

To test their suspicions, the researchers uploaded an extension that could exploit the weakness and steal plain-text passwords from the HTML pages of websites. It contained no malicious code, so it passed security screening at Google’s Chrome Web Store.

The ease with which the researchers uploaded a potentially harmful extension “underscores the urgent need for more robust security measures,” Nayak said.

The researchers disabled the extension after they established it could bypass security measures and read restricted data.

Nayak said the extension flaws stemmed from violations of two key security principles: least privilege and complete mediation.

Least privilege refers to the principle that users and systems should be granted only the lowest level of access required to complete their tasks. Any unnecessary privilege should be withheld, and the default access state should be “deny,” not “allow.”

Complete mediation refers to evaluation of each and every access request, with no deviations or exceptions.
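
Both principles can be sketched in a few lines. The snippet below is a conceptual illustration, not browser or extension code: access defaults to deny, grants cover only what an extension actually needs, and every request is checked.

```python
# Conceptual sketch of least privilege and complete mediation (not browser code).
GRANTED = {
    "calendar-helper": {"read_calendar"},         # least privilege: only the
    "password-manager": {"read_password_field"},  # access each extension needs
}

def mediate(extension: str, permission: str) -> bool:
    """Complete mediation: every single request is checked; the default is deny."""
    return permission in GRANTED.get(extension, set())

print(mediate("calendar-helper", "read_password_field"))   # False: denied by default
print(mediate("password-manager", "read_password_field"))  # True: explicitly granted
```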

The researchers proposed two ways to address the problem. The first is a JavaScript add-on for all extensions that provides protective cover for sensitive input fields.

The second proposal is to add a browser feature that alerts users when an attempt is made to access sensitive data.

The report, “Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields,” raised particular alarm over vulnerabilities at two major websites.

“Major online marketplaces such as Google and Amazon do not implement any protections for credit card input fields,” the report stated. “In these cases, credit card details, including the Security Code and zip code, are visible in plain text on the webpage. This presents a significant security risk, as any malicious extension could potentially access and steal this sensitive information.”

The report continued, “The lack of protection on these websites is particularly concerning, given their scale and the volume of transactions they handle daily.”

In response to the report, an Amazon spokesperson said, “We encourage browser and extension developers to use security best practices to further protect customers using their services.”

A Google spokesperson said they are looking into the matter.

More information:
Asmit Nayak et al, Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields, arXiv (2023). DOI: 10.48550/arxiv.2308.16321

Journal information:
arXiv


© 2023 Science X Network

Citation:
Researchers issue warning over Chrome extensions that access private data (2023, September 6)
retrieved 29 June 2024
from https://techxplore.com/news/2023-09-issue-chrome-extensions-access-private.html
