German airline giant Lufthansa said Tuesday it would add an environmental charge of up to 72 euros ($77) to fares in Europe to cover the rising cost of complying with EU climate regulations.
The extra cost will be added to all flights sold and operated by the group departing from EU countries as well as Britain, Norway and Switzerland, it said in a statement.
It will apply to flights from January next year and, depending on the route and fare, will vary from one to 72 euros.
“The airline group will not be able to bear the successively increasing additional costs resulting from regulatory requirements in the coming years on its own,” said Lufthansa.
The group—whose airlines include Lufthansa, Eurowings, Austrian, Swiss and Brussels Airlines—said it is facing extra costs from EU regulations related to sustainable aviation fuel (SAF).
The EU legislation requires airlines to gradually increase use of the fuel on routes departing EU airports.
Carriers will need to include two percent of SAF in their fuel mix from next year, rising to six percent in 2030 and then soaring to 70 percent from 2050.
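As a rough illustration of why the blending mandate raises fuel costs, a simple blended-price calculation follows; the per-litre prices are hypothetical placeholders for illustration, not figures from the article:

# Illustrative only: the fuel prices below are assumed placeholders, not article data.
def blended_fuel_price(saf_share, jet_price_eur=0.8, saf_price_eur=2.4):
    """Average price per litre of fuel under a mandated SAF blend share (0.0-1.0)."""
    return saf_share * saf_price_eur + (1 - saf_share) * jet_price_eur

for year, share in [(2025, 0.02), (2030, 0.06), (2050, 0.70)]:
    print(f"{year}: {share:.0%} SAF -> {blended_fuel_price(share):.2f} EUR/litre")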
The aviation sector is among the toughest to decarbonize, and SAF—a biofuel that produces lower carbon emissions than traditional jet fuel—is seen as crucial to hitting emissions targets, but it is currently more expensive to produce.
In March, Airlines for Europe, which represents the continent’s largest airline groups including Lufthansa, complained that production of the fuel in Europe is minimal and lags far behind projects launched in the United States.
Lufthansa said it also faces extra costs from changes to the EU’s emissions trading system, and other regulatory measures.
The group aims to halve its net carbon emissions by 2030 compared to 2019, and to go carbon neutral by 2050.
After having to be bailed out by the German government during the coronavirus pandemic, Lufthansa racked up healthy profits in 2022 and 2023 as travel demand roared back.
But it was hard hit by a series of strikes at the start of this year, reporting a hefty first-quarter loss.
Citation:
Lufthansa to add environmental charge to fares (2024, June 25)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-lufthansa-environmental-fares.html
Wikipedia is the largest platform for open and freely accessible knowledge online. Yet, in a new study, EPFL researchers have found that around 15% of its content is effectively invisible to readers browsing within Wikipedia. They have developed a new tool to help overcome this. The work is published on the arXiv preprint server.
With 60 million articles in more than 300 language versions, Wikipedia’s available content grows continuously at a rate of around 200,000 new articles each month. Readers often discover new knowledge and dig deeper into a subject by clicking hyperlinks that connect one article to the next. But what about Wikipedia articles that no other articles link to?
These are commonly referred to as ‘orphan’ articles. To better understand this phenomenon, EPFL researchers from the Data Science Laboratory (DLAB) in the School of Computer and Communication Sciences, in collaboration with the Research Team at the Wikimedia Foundation, conducted the first systematic study of orphan articles across all 319 language versions of Wikipedia that existed at the time the study was conducted.
“Wikipedia is a network just like roads, the internet, chemical compounds, or genes, and any network has a basic concept of navigability so you can go from one place to another. Information networks are organized in particular hierarchies and we were curious to understand articles that were not reached by anyone. That’s how we started to look into orphan articles,” explained Akhil Arora, a Ph.D. researcher in DLAB and lead author of the study “Orphan Articles: The Dark Matter of Wikipedia.”
The researchers found that almost 9 million articles on Wikipedia across all languages—around 15%—were orphans, effectively invisible to readers browsing within Wikipedia and spread across nearly all topic areas on the platform. On average, non-orphan articles receive twice as many pageviews as orphan articles. Beyond simple correlations, the researchers also established a cause-and-effect relationship between the addition of incoming links to orphan articles and an increase in their pageviews.
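To make the notion concrete, here is a minimal sketch of how orphan status can be detected in a link graph; the toy articles and links below are invented for illustration, not the study’s actual data or pipeline:

# Minimal sketch: find orphan articles (no incoming links) in a link graph.
# The tiny in-memory graph is assumed for illustration; the study worked on
# full Wikipedia link data across 319 language versions.
links = {
    "Graph theory": ["Network", "Hyperlink"],
    "Network": ["Hyperlink"],
    "Hyperlink": [],
    "Obscure topic": ["Network"],  # links out, but nothing links to it
}

linked_to = {target for targets in links.values() for target in targets}
orphans = [article for article in links if article not in linked_to]
print(orphans)  # ['Graph theory', 'Obscure topic']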
The lack of visibility of orphan articles comes down to the ways users find and view pages on Wikipedia. The first is via a search engine, which points a user to a particular Wikipedia page; the second is using Wikipedia as an encyclopedia and clicking through from one article to another; and the third is a combination of both.
In all these scenarios, an editor will not only need to add links in the outgoing direction from the article they are editing, but will also need to know all the relevant Wikipedia articles that could potentially link inward, and this is a difficult prospect.
“An editor is editing something they know a lot about so they are able to add outward links to other articles,” said Arora. “Reversing directionality introduces so many difficulties because they may not be an expert on other topics and articles; sometimes these relationships are not symmetrical and the universe is the entirety of Wikipedia.”
The research found large discrepancies across languages. In more than 100 languages, the percentage of orphan articles exceeds 30%, with particularly high figures for Egyptian Arabic (78%) and Vietnamese (50%). Both are among the 20 largest Wikipedia language versions. This points to a lack of editor capacity in some languages and demonstrates the need to improve existing tools, such as FindLink, that support editors in this task.
One interesting finding of the study is that an orphan article in one language is not always an orphan in other languages, and this led the researchers to develop a new approach for identifying articles from which to link to orphans via link translation.
“If the same article is not an orphan in another language, it means the editors in that community were able to find other articles that could link to this article. So we simply just transferred the link from other languages to the language in which the article was an orphan. We found this approach was able to suggest links for more than 63% of the orphan articles,” said Arora.
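A minimal sketch of that link-translation idea follows, assuming a toy table of in-links and interlanguage equivalents rather than Wikipedia’s real data model; all titles and mappings are illustrative:

# Sketch of link translation: if an article has in-links in another language
# edition, map those linking articles back to the orphan's language.
# All data below is invented for illustration.
inlinks = {
    ("en", "Foo"): ["Bar", "Baz"],   # English "Foo" has incoming links
    ("vi", "Foo"): [],               # Vietnamese "Foo" is an orphan
}
# Interlanguage mapping: English title -> Vietnamese title (assumed)
langlinks_en_to_vi = {"Bar": "Bar_vi", "Baz": "Baz_vi"}

def suggest_inlinks(orphan_title, source_lang="en"):
    """Suggest articles that could link to the orphan, via another language."""
    candidates = inlinks.get((source_lang, orphan_title), [])
    return [langlinks_en_to_vi[c] for c in candidates if c in langlinks_en_to_vi]

print(suggest_inlinks("Foo"))  # ['Bar_vi', 'Baz_vi'] as suggested linking articles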
The EPFL team is continuing to collaborate with researchers at the Wikimedia Foundation on ways this approach could be made available as a tool, an initial prototype of which exists, to improve the experience of readers on Wikipedia. It is also using AI to help this effort on two fronts.
First, the researchers are working on graph neural networks to organize link recommendations, which will serve as a basis for the tool. Second, they are developing an additional tool, similar to a heat map, that can guide editors to where in an article’s text they should consider adding new concepts, and that will then use generative AI to suggest some starting text.
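The article does not describe the model in detail, so the following sketch uses a much simpler stand-in for the ranking step: scoring candidate source articles by cosine similarity of embeddings, with random vectors in place of whatever representations a graph neural network would actually learn:

import numpy as np

# Placeholder embeddings standing in for learned GNN representations;
# names and vectors are illustrative only.
rng = np.random.default_rng(0)
embeddings = {name: rng.normal(size=16)
              for name in ["Orphan article", "Candidate A", "Candidate B", "Candidate C"]}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

orphan_vec = embeddings["Orphan article"]
ranked = sorted(((name, cosine(orphan_vec, vec))
                 for name, vec in embeddings.items() if name != "Orphan article"),
                key=lambda pair: pair[1], reverse=True)
print(ranked)  # candidate source articles ordered by similarity to the orphan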
Importantly, volunteer editors improve, edit, and audit the work done by AI. The approach to AI on Wikipedia has always been through “closed loop” systems, in which humans are in the loop.
“The editor community is doing its service to the world but there are not enough of them, particularly in smaller languages. One of our goals is to better support editors because it can be a daunting task to write and maintain articles. Wikipedia is an incredible open access service and this is why the tools that we’re building are so helpful to editors doing this valuable work,” concluded Arora.
More information:
Akhil Arora et al, Orphan Articles: The Dark Matter of Wikipedia, arXiv (2023). DOI: 10.48550/arxiv.2306.03940
Citation:
Orphan articles: The ‘dark matter’ of Wikipedia (2024, May 17)
retrieved 25 June 2024
from https://techxplore.com/news/2024-05-orphan-articles-dark-wikipedia.html
A team at Los Alamos National Laboratory has used machine learning—an application of artificial intelligence—to detect the hidden signals that precede an earthquake. The findings at the Kīlauea volcano in Hawaii are part of a years-long research effort pioneered at Los Alamos, and this latest study represents the first time scientists were able to detect these warning signals in a stick-slip fault, the kind that can generate massive destruction.
The paper is published in the journal Geophysical Research Letters.
“We wanted to see if we could pull out signals from the noise and identify where in the loading cycle the system was in terms of nearing a major slip, which causes earthquakes,” said Christopher Johnson, a seismologist at Los Alamos and the team’s lead researcher. “This is the first time we’ve been able to apply this method to an earthquake of this type and of this magnitude.”
The team used data recorded between June 1, 2018, and August 2, 2018, by the U.S. Geological Survey’s Hawaiian Volcano Observatory. In this period, the volcano experienced more than 50 quakes of varying magnitudes. The researchers focused on 30-second windows of seismic data, and their model identified something akin to a fingerprint, a hidden signal that tracked the loading cycle of each event. On average, that hidden signal was present continuously in the lead-up to each detectable large ground movement.
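As a rough illustration of the windowing step, the sketch below slices a synthetic seismic trace into 30-second windows and computes simple statistical features per window; the sample rate, the features, and the data are all assumptions for illustration, not the study’s actual pipeline:

import numpy as np

# Slice a continuous seismic trace into 30-second windows and compute simple
# per-window statistics (illustrative stand-ins for the study's features).
SAMPLE_RATE = 100            # samples per second (assumed)
WINDOW = 30 * SAMPLE_RATE    # 30-second windows, as in the study

trace = np.random.default_rng(1).normal(size=WINDOW * 50)  # synthetic trace
windows = trace[: len(trace) // WINDOW * WINDOW].reshape(-1, WINDOW)

features = np.column_stack([
    windows.var(axis=1),                                     # emission energy
    np.abs(windows).max(axis=1),                             # peak amplitude
    ((windows[:, :-1] * windows[:, 1:]) < 0).mean(axis=1),   # zero-crossing rate
])
print(features.shape)  # (n_windows, 3) feature matrix for a downstream model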
Combined with previous tests, the results suggest that some earthquake faults share similar physics, meaning this method could be used to assess earthquake hazards across the globe.
Patterns in the noise
The research builds on previous work conducted by Los Alamos on faults in California and the Pacific Northwest, where machine learning was able to detect these precursory signals.
As tectonic plates press against each other, they create weak tremors in the ground, called continuous acoustic or seismic emissions. These signals appear like waveforms when recorded but were previously believed to be noise—data without information describing the state of the fault. Instead, Los Alamos researchers have found that continuous acoustic emission waveforms are, in fact, rich with data and can be used to infer physical properties of a fault, such as displacement, friction, and thickness.
Most importantly, Los Alamos scientists have found highly predictable patterns in the signals, a sort of timeline to failure.
“When we look at these continuous signals, we can pull out information that tells us where the fault is in its loading cycle,” Johnson said. “We’re looking at how the noise evolves and that gives us details about its current state and where it is in the slip cycle.”
From slow-slip to stick-slip
This research marked the first time the team successfully applied the approach to seismogenic faults, the layer in which earthquakes originate. In this case, that was a sequence of highly active, magnitude-5 stick-slip events at the Kīlauea volcano, which experienced a months-long seismic event that caused the caldera to sink 1,600 feet.
During that time, a global navigation satellite system measured millimeter-scale displacement of the ground. The machine learning model then analyzed this data, processed the seismic signals, and successfully estimated the ground displacement and time to the next fault failure.
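A hedged sketch of that supervised setup is below, using scikit-learn’s gradient boosting as a generic stand-in (the article does not name the model) and entirely synthetic features and displacement targets:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins: per-window seismic features and GNSS-derived
# displacement targets (millimeter scale, as described in the article).
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                                  # window features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X[:400], y[:400])      # train
pred = model.predict(X[400:])                                  # held-out test
print(float(np.mean(np.abs(pred - y[400:]))))                  # mean absolute error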
Previously, Los Alamos researchers had applied similar machine learning models to slow-slip events, which cause the ground to rattle subtly for days, months, or even years before a seismic event. Such large data sets were helpful to train the machine learning models. But the most destructive earthquakes are caused by stick-slip faults, like that found at the Kīlauea volcano, which can generate much stronger ground motions more quickly, and have until now eluded prediction.
More information:
Christopher W. Johnson et al, Seismic Features Predict Ground Motions During Repeating Caldera Collapse Sequence, Geophysical Research Letters (2024). DOI: 10.1029/2024GL108288
Citation:
Modeling software reveals patterns in continuous seismic waveforms during series of stick-slip, magnitude-5 earthquakes (2024, June 25)
retrieved 25 June 2024
from https://phys.org/news/2024-06-software-reveals-patterns-seismic-waveforms.html
In a significant development for the miniaturization of electronic devices, a study published in Engineering has reported the creation of a microelectromechanical systems (MEMS) clock that offers improved precision and stability. The paper is titled “MEMS Huygens Clock Based on Synchronized Micromechanical Resonators.”
The clock, which utilizes the synchronization principle discovered by Christiaan Huygens, consists of two synchronized MEMS oscillators and a frequency compensation system.
The research details how the MEMS Huygens clock enhances short-term stability, with the Allan deviation—a measure of a clock’s frequency stability over a given averaging time—improving by a factor of 3.73, from 19.3 ppb to 5.17 ppb at 1 second. The clock’s long-term stability is also significantly boosted, with the Allan deviation improving by a factor of 1.6343 × 10⁵, to 30.9 ppt at 6,000 seconds.
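For readers who want to reproduce such figures on their own data, the non-overlapping Allan deviation can be computed from fractional-frequency samples as follows; this is the textbook definition with synthetic noise, not the authors’ code or data:

import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    at an averaging time of m sampling intervals."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averaged frequencies
    return float(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))

# Synthetic white frequency noise at the 1e-8 (10 ppb) level.
y = np.random.default_rng(3).normal(scale=1e-8, size=100_000)
for m in (1, 10, 100):
    print(m, allan_deviation(y, m))  # deviation falls as averaging time grows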
To achieve these results, the researchers developed a frequency compensation system that counteracts the MEMS oscillator’s temperature-frequency characteristics, thereby maintaining the clock’s accuracy by controlling the resonator current. This innovation yields a highly efficient method of compensating for frequency shifts in both oscillators simultaneously, consuming just 2.85 mW∙°C⁻¹.
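As a minimal illustration of this kind of open-loop temperature compensation, the sketch below cancels an assumed linear temperature-frequency drift by adjusting drive current; the coefficients and the linear model are hypothetical, not values from the paper:

# Illustrative open-loop compensation: adjust resonator drive current to
# cancel an assumed linear temperature-frequency drift.
# Both coefficients are hypothetical, not from the paper.
TEMP_COEFF_PPM_PER_C = -30.0     # frequency drift per degree C (assumed)
CURRENT_GAIN_PPM_PER_MA = 5.0    # frequency shift per mA of drive current (assumed)

def compensation_current_ma(temp_c, ref_temp_c=25.0):
    """Drive-current adjustment that cancels the modeled thermal drift."""
    drift_ppm = TEMP_COEFF_PPM_PER_C * (temp_c - ref_temp_c)
    return -drift_ppm / CURRENT_GAIN_PPM_PER_MA

for t in (20.0, 25.0, 30.0):
    print(t, "degC ->", round(compensation_current_ma(t), 2), "mA")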
The study’s comprehensive solution scheme paves the way for high-precision MEMS oscillators and expands the application scope of synchronization in MEMS technology. With the continuous shrinking of electronic components, this breakthrough offers promising prospects for industries relying on precise timekeeping, such as telecommunications, navigation, and data processing.
As the demand for more accurate and reliable timing sources grows, the MEMS Huygens clock presented in this study stands to make a substantial impact on the future of microelectromechanical systems and their integration into everyday technologies.
More information:
Xueyong Wei et al, MEMS Huygens Clock Based on Synchronized Micromechanical Resonators, Engineering (2024). DOI: 10.1016/j.eng.2023.12.013
Provided by Engineering
Citation:
MEMS Huygens clock improves timekeeping precision and stability (2024, June 12)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-mems-huygens-clock-timekeeping-precision.html