
Europe’s fight with Big Tech

Tech giants have been targeted by the EU for a number of allegedly unfair practices.

The European Union warned Apple on Monday that its App Store is breaching its digital competition rules, placing the iPhone maker at risk of billions of dollars in fines.

It is the latest in a years-long battle between Brussels and giant tech firms, covering subjects from data privacy to disinformation.

Stifling competition

Brussels has doled out over 10 billion euros in fines to tech firms for abusing their dominant market positions.

The latest threat for Apple comes three months after the bloc hit the California firm with a 1.8-billion-euro ($1.9 billion) penalty for preventing European users from accessing information about cheaper music streaming services.

Among big tech firms, only Google has faced a bigger single antitrust fine—more than four billion euros in 2018 for using its Android mobile operating system to promote its search engine.

Google has also incurred billion-plus fines for abusing its power in the online shopping and advertising sectors.

The European Commission, the EU’s executive, recommended last year that Google should sell parts of its business and could face a fine of up to 10 percent of its global revenue if it fails to comply.

Privacy

Ireland issues the stiffest data privacy fines, since the rules are enforced by national regulators and Dublin hosts the European headquarters of several big tech firms.

The Irish regulator handed TikTok a 345-million-euro penalty for mishandling children’s data last September just months after it hit Meta with a record fine of 1.2 billion euros for illegally transferring personal data between Europe and the United States.

Luxembourg had previously held the record for data fines after it slapped Amazon with a 746-million-euro penalty in 2021.

Taxation

The EU has had little success in getting tech companies to pay more taxes in Europe, where they are accused of funneling profits into low-tax economies like Ireland and Luxembourg.

In one of the most notorious cases, the European Commission in 2016 ordered Apple to pay Ireland more than a decade in back taxes—13 billion euros—after ruling a sweetheart deal with the government was illegal.

But EU judges overturned the order, saying there was no evidence the company had broken the rules, a ruling the commission has been trying to reverse ever since.

The commission is also fighting to reverse another court loss, after judges overruled its order for Amazon to repay 250 million euros in back taxes to Luxembourg.

Disinformation, hate speech

Web platforms have long faced accusations of failing to combat hate speech, disinformation and piracy.

The EU passed the Digital Services Act last year, which is designed to force companies to tackle these issues or face fines of up to six percent of their global turnover.

Already the bloc has begun to show how the DSA might be applied, opening probes into Facebook and Instagram for failing to tackle election-related disinformation.

The bloc has also warned Microsoft that the falsehoods generated by its AI search could fall foul of the DSA.

Paying for news

Google and other online platforms have also been accused of making billions from news without sharing the revenue with those who gather it.

To tackle this, the EU created a form of copyright called “neighboring rights” that allows print media to demand compensation for using their content.

France has been a test case for the rules and after initial resistance Google and Facebook both agreed to pay some French media for articles shown in web searches.

© 2024 AFP

Citation:
Dominance, data, disinformation: Europe’s fight with Big Tech (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-dominance-disinformation-europe-big-tech.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Breaking benchmarks: Frontier supercomputer sets new standard in molecular simulation
The team used Frontier with the Large-scale Atomic and Molecular Massively Parallel Simulator software module to simulate a system of room-temperature water molecules at the atomic level as they gradually increased the number of atoms. Credit: ORNL, U.S. Dept. of Energy

When scientists pushed the world’s fastest supercomputer to its limits, they found those limits stretched beyond even their biggest expectations.

The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory set a new ceiling in performance when it debuted in 2022 as the first exascale system in history—capable of more than 1 quintillion calculations per second. Now researchers are learning just what heights of scientific discovery Frontier’s computational power can help them achieve.

In the latest milestone, a team of engineers and scientists used Frontier to simulate a system of nearly half a trillion atoms—the largest system ever modeled and more than 400 times the size of the closest competition—in a potential gateway to new insights across the scientific spectrum.

“It’s like test-driving a car with a speedometer that registers 120 miles per hour, but you press the gas pedal and find out it goes past 200,” said Nick Hagerty, a high-performance computing engineer for ORNL’s Oak Ridge Leadership Computing Facility, which houses Frontier.

“Nobody runs simulations at a scale this size because nobody’s ever tried. We didn’t know we could go this big.”

The results hold promise for scientific studies at a scale and level of detail not yet seen.

“Nobody on Earth has done anything remotely close to this before,” said Dilip Asthagiri, an OLCF senior computational biomedical scientist who helped design the test. “This discovery brings us closer to simulating a stripped-down version of a biological cell, the so-called minimal cell that has the essential components to enable basic life processes.”

Hagerty and his team sought to max out Frontier to set criteria for the supercomputer’s successor machine, still in development. Their mission: Push Frontier as far as it could go and see where it stopped.

The team used Frontier with the Large-scale Atomic and Molecular Massively Parallel Simulator software module, or LAMMPS, to simulate a system of room-temperature water molecules at the atomic level as they gradually increased the number of atoms.

“Water is a great test case for a machine like Frontier because any researcher studying a biological system at the atomic level will likely need to simulate water,” Hagerty said. “We wanted to see how big of a system Frontier could really handle and what limitations are encountered at this scale.

“As one of the first benchmarking efforts to use more than a billion atoms with long-range interactions, we would periodically find bugs in the LAMMPS source code. We worked with the LAMMPS developers, who were highly engaged and responsive, to resolve those bugs, and this was critical to our scaling success.”

Frontier tackles complex problems via parallelism, which means the supercomputer breaks up the computational workload across its 9,408 nodes, each a self-contained computer capable of around 200 trillion calculations per second. With each increase in problem size, the simulation demanded more memory and processing power. Frontier never blinked.

“We’re not talking about a large simulation just in terms of physical size,” Asthagiri said. “After all, a billion water molecules would fit in a cube with edges smaller than a micrometer (a millionth of a meter). We’re talking large in terms of the complexity and detail.

“These millions and eventually billions and hundreds of billions of atoms interact with every other atom, no matter how far away. These long-range interactions increase significantly with every molecule that’s added. This is the first simulation of this kind at this size.”

The simulation ultimately grew to more than 155 billion water molecules—a total of 466 billion atoms—across more than 9,200 of Frontier’s nodes. The supercomputer kept crunching the numbers, even with 95% of its memory full. The team stopped there.
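The scale described here is easy to sanity-check with back-of-the-envelope arithmetic; the short sketch below uses only figures quoted in the article:

```python
# Sanity-check the reported figures (all numbers come from the article).
nodes = 9408                 # Frontier compute nodes
flops_per_node = 200e12      # each node: ~200 trillion calculations per second
total_flops = nodes * flops_per_node
print(f"aggregate: {total_flops:.2e} calc/s")   # ~1.88e18, past the exascale mark

atoms = 466e9                # atoms in the largest water simulation
nodes_used = 9200            # nodes used in that run
print(f"atoms per node: {atoms / nodes_used:.2e}")  # roughly 50 million per node
```

The aggregate works out to nearly two quintillion calculations per second, consistent with the "more than 1 quintillion" exascale threshold mentioned above.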

“We could have gone even higher,” Hagerty said. “The next level would have been 600 billion atoms, and that would have consumed 99% of Frontier’s memory. We stopped because we were already far beyond a size anyone’s ever reached to conduct any meaningful science. But now we know it can be done.”

That capacity for detail could offer the chance to conduct vastly more complex studies than some scientists had hoped for on Frontier.

“This changes the game,” Asthagiri said. “Now we have a way to model these complex systems and their long-range interactions at extremely large sizes and have a hope of seeing emergent phenomena.

“For example, with this computing power, we could simulate sub-cellular components and eventually the minimal cell in atomic detail. From such explorations, we could learn about the spatial and temporal behavior of these cell structures that are basic to human, animal and plant life as we know it. This kind of opportunity is what an exascale machine like Frontier is for.”

Citation:
Breaking benchmarks: Frontier supercomputer sets new standard in molecular simulation (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-benchmarks-frontier-supercomputer-standard-molecular.html


A new method to achieve smooth gait transitions in hexapod robots

The real Hexapod robot used to validate the team’s control method. Credit: Heliyon (2024). DOI: 10.1016/j.heliyon.2024.e31847

Robots that can navigate various terrains both rapidly and efficiently could be highly advantageous, as they could successfully complete complex missions in challenging environments. For instance, these robots could help to monitor complex natural environments, such as forests, or could search for survivors after natural disasters.

One of the most common types of robots designed to navigate varying terrains is the legged robot, whose design is often inspired by the body structures of animals. To move swiftly over varying terrain, legged robots should be able to adapt their movements and gait styles based on detected changes in environmental conditions.

Researchers at the Higher Institute for Applied Science and Technology in Damascus, Syria, recently developed a new method to facilitate a smooth transition between the different gaits of a hexapod robot.

Their proposed gait control technique, introduced in a paper published in Heliyon, is based on so-called central pattern generators (CPGs): computational models that mimic biological CPGs, the neural networks underpinning many rhythmic movements performed by humans and animals (e.g., walking, swimming and running).

“Our recent publication is a foundational component of a larger project that aims to revolutionize the locomotion control of hexapod robots,” Kifah Helal, corresponding author of the paper, told Tech Xplore.

“While machine learning techniques have not yet been integrated, the architecture we’ve designed lays the groundwork for such advanced applications. Our methodology is crafted with future machine learning integration in mind, ensuring that when implemented, it will significantly enhance malfunction compensation.”

Helal and his colleagues first set out to design and simulate a six-legged (hexapod) robot. This simulated robotic platform was then used to test their proposed control architecture based on CPGs.

Gait transitions between different gaits as the angular velocity of the oscillators is varied from 2.5 to 7.5 rad/s. The term Di represents how far leg i is from synchronization; the figure shows how it affects the instantaneous frequency of each oscillator so as to synchronize the network. Credit: Heliyon (2024). DOI: 10.1016/j.heliyon.2024.e31847

“Our control method leverages the principles of CPGs where each leg of the hexapod robot is governed by a distinct rhythmic signal,” Helal explained. “The essence of different gaits lies in the phase differences between these signals. Our paper’s core contribution is the novel interaction design among the oscillators, ensuring seamless gait transitions.”
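The phase-difference idea can be illustrated with a toy network of coupled phase oscillators. The sketch below is not the authors' controller (their novel oscillator interaction design is the paper's contribution); it only shows the general principle: coupling pulls six oscillators toward prescribed phase offsets, here the alternating pattern of a tripod gait.

```python
import numpy as np

# Toy CPG network: six phase oscillators, one per leg, each pulled toward a
# prescribed phase offset relative to leg 0. The offsets 0/pi give a tripod
# gait (two alternating groups of three legs in antiphase).
N = 6
omega = 5.0        # intrinsic angular velocity, rad/s (arbitrary choice)
K = 4.0            # coupling strength (arbitrary choice)
target = np.array([0.0, np.pi, 0.0, np.pi, 0.0, np.pi])

phase = np.random.default_rng(0).uniform(0, 2 * np.pi, N)  # random start
dt = 0.001
for _ in range(20000):  # integrate 20 s of simulated time (forward Euler)
    # each oscillator is attracted toward leg 0's phase plus its offset
    coupling = np.sin((phase[0] + target) - phase)
    phase += dt * (omega + K * coupling)

# after convergence, legs 1, 3, 5 are ~pi out of phase with legs 0, 2, 4
rel = (phase - phase[0]) % (2 * np.pi)
print(np.round(rel, 2))
```

Changing `target` mid-run would smoothly re-synchronize the network toward a different gait, which is the kind of transition the paper's interaction design makes fast and fluid.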

Helal and his colleagues also developed a workspace trajectory generator, a computational tool that translates the outputs of oscillators integrated in a hexapod robot into trajectories for its feet, ensuring that these trajectories remain effective during transitions. In initial tests, their proposed control architecture was found to enable stable, efficient and swift changes in gait in both a simulated and real hexapod robot.

“The most striking outcomes of our research are the harmonious blend of transition smoothness and speed,” Helal said. “Essentially, it’s the fusion of fluidity and quickness that sets our work apart from other previous efforts. We also validated a mapping function that ensures the robot’s foot trajectory remains effective throughout these transitions.”

The new architecture introduced by this team of researchers could soon be tested in further experiments and applied to other legged robots, to allow them to swiftly adapt to environmental changes while retaining their agility.

In their next studies, Helal and his colleagues plan to further improve their method, to tackle potential malfunctions and further boost its performance when robots encounter particularly challenging terrains.

“Looking ahead, we plan to delve deeper into machine learning to further refine our robot’s environmental adaptability,” Helal added. “We’re particularly excited about exploring malfunction compensation and integrating pain sensing as feedback mechanisms.

“These advancements will not only improve the robot's interaction with its surroundings but also pave the way for more autonomous and resilient robotic systems.”

More information:
Kifah Helal et al, Workspace trajectory generation with smooth gait transition using CPG-based locomotion control for hexapod robot, Heliyon (2024). DOI: 10.1016/j.heliyon.2024.e31847

© 2024 Science X Network

Citation:
A new method to achieve smooth gait transitions in hexapod robots (2024, June 23)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-method-smooth-gait-transitions-hexapod.html


New open-source platform allows users to evaluate performance of AI-powered chatbots

(A) Contrasting typical static evaluation (Top) with interactive evaluation (Bottom), wherein a human iteratively queries a model and rates the quality of responses. (B) Example subset of the chat interface from CheckMate where users interact with an LLM. The participant can type their query (Lower Left), which is compiled in LaTeX (Lower Right). When ready, the participant can press “Interact” and have their query routed to the model. Credit: Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2318124121

A team of computer scientists, engineers, mathematicians and cognitive scientists, led by the University of Cambridge, has developed an open-source evaluation platform called CheckMate, which allows human users to interact with and evaluate the performance of large language models (LLMs).

The researchers tested CheckMate in an experiment where human participants used three LLMs—InstructGPT, ChatGPT and GPT-4—as assistants for solving undergraduate-level mathematics problems.

The team studied how well LLMs can assist participants in solving problems. Despite a generally positive correlation between a chatbot’s correctness and perceived helpfulness, the researchers also found instances where the LLMs were incorrect, but still useful for the participants. However, certain incorrect LLM outputs were thought to be correct by participants. This was most notable in LLMs optimized for chat.

The researchers suggest that models which communicate uncertainty, respond well to user corrections and provide a concise rationale for their recommendations make better assistants. Given their current shortcomings, human users of LLMs should verify their outputs carefully.

The results, reported in the Proceedings of the National Academy of Sciences, could be useful both in informing AI literacy training and in helping developers improve LLMs for a wider range of uses.

While LLMs are becoming increasingly powerful, they can also make mistakes and provide incorrect information, which could have negative consequences as these systems become more integrated into our everyday lives.

“LLMs have become wildly popular, and evaluating their performance in a quantitative way is important, but we also need to evaluate how well these systems work with and can support people,” said co-first author Albert Jiang, from Cambridge’s Department of Computer Science and Technology. “We don’t yet have comprehensive ways of evaluating an LLM’s performance when interacting with humans.”

The standard way to evaluate LLMs relies on static pairs of inputs and outputs, which disregards the interactive nature of chatbots, and how that changes their usefulness in different scenarios. The researchers developed CheckMate to help answer these questions, designed for but not limited to applications in mathematics.

“When talking to mathematicians about LLMs, many of them fall into one of two main camps: either they think that LLMs can produce complex mathematical proofs on their own, or that LLMs are incapable of simple arithmetic,” said co-first author Katie Collins from the Department of Engineering. “Of course, the truth is probably somewhere in between, but we wanted to find a way of evaluating which tasks LLMs are suitable for and which they aren’t.”

The researchers recruited 25 mathematicians, from undergraduate students to senior professors, to interact with three different LLMs (InstructGPT, ChatGPT, and GPT-4) and evaluate their performance using CheckMate. Participants worked through undergraduate-level mathematical theorems with the assistance of an LLM and were asked to rate each individual LLM response for correctness and helpfulness. Participants did not know which LLM they were interacting with.

The researchers recorded the sorts of questions asked by participants, how participants reacted when they were presented with a fully or partially incorrect answer, whether and how they attempted to correct the LLM, or if they asked for clarification. Participants had varying levels of experience with writing effective prompts for LLMs, and this often affected the quality of responses that the LLMs provided.

An example of an effective prompt is “What is the definition of X?” (where X is a concept in the problem), as chatbots can be very good at retrieving concepts they know and explaining them to the user.

“One of the things we found is the surprising fallibility of these models,” said Collins. “Sometimes, these LLMs will be really good at higher-level mathematics, and then they’ll fail at something far simpler. It shows that it’s vital to think carefully about how to use LLMs effectively and appropriately.”

However, like the LLMs, the human participants also made mistakes. The researchers asked participants to rate how confident they were in their own ability to solve the problem they were using the LLM for. In cases where participants were less confident in their own abilities, they were more likely to rate incorrect generations by the LLM as correct.

“This kind of gets to a big challenge of evaluating LLMs, because they’re getting so good at generating nice, seemingly correct natural language, that it’s easy to be fooled by their responses,” said Jiang. “It also shows that while human evaluation is useful and important, it’s nuanced, and sometimes it’s wrong. Anyone using an LLM, for any application, should always pay attention to the output and verify it themselves.”

Based on the results from CheckMate, the researchers say that newer generations of LLMs are increasingly able to collaborate helpfully and correctly with human users on undergraduate-level math problems, as long as the user can assess the correctness of LLM-generated responses.

Even if the answers may be memorized and can be found somewhere on the internet, LLMs have the advantage over traditional search engines of being flexible in their inputs and outputs (though they should not replace search engines in their current form).

While CheckMate was tested on mathematical problems, the researchers say their platform could be adapted to a wide range of fields. In the future, this type of feedback could be incorporated into the LLMs themselves, although none of the CheckMate feedback from the current study has been fed back into the models.

“These kinds of tools can help the research community to have a better understanding of the strengths and weaknesses of these models,” said Collins. “We wouldn’t use them as tools to solve complex mathematical problems on their own, but they can be useful assistants if the users know how to take advantage of them.”

More information:
Katherine M. Collins et al, Evaluating language models for mathematics through interactions, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2318124121

Citation:
New open-source platform allows users to evaluate performance of AI-powered chatbots (2024, June 4)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-source-platform-users-ai-powered.html


An eerie ‘digital afterlife’ is no longer science fiction. So how do we navigate the risks?

Credit: Pixabay/CC0 Public Domain

Imagine a future where your phone pings with a message that your dead father’s “digital immortal” bot is ready. This promise of chatting with a virtual version of your loved one—perhaps through a virtual reality (VR) headset—is like stepping into a sci-fi movie, both thrilling and a bit eerie.

As you interact with this digital dad, you find yourself on an emotional rollercoaster. You uncover secrets and stories you never knew, changing how you remember the real person.

This is not a distant, hypothetical scenario. The digital afterlife industry is rapidly evolving. Several companies promise to create virtual reconstructions of deceased individuals based on their digital footprints.

From artificial intelligence (AI) chatbots and virtual avatars to holograms, this technology offers a strange blend of comfort and disruption. It may pull us into deeply personal experiences that blur the lines between past and present, memory and reality.

As the digital afterlife industry grows, it raises significant ethical and emotional challenges. These include concerns about consent, privacy and the psychological impact on the living.

What is the digital afterlife industry?

VR and AI technologies are making virtual reconstructions of our loved ones possible. Companies in this niche industry use data from social media posts, emails, text messages and voice recordings to create digital personas that can interact with the living.

Although still niche, the number of players in the digital afterlife industry is growing.

HereAfter allows users to record stories and messages during their lifetime, which can then be accessed by loved ones posthumously. MyWishes offers the ability to send pre-scheduled messages after death, maintaining a presence in the lives of the living.

Hanson Robotics has created robotic busts that interact with people using the memories and personality traits of the deceased. Project December grants users access to so-called “deep AI” to engage in text-based conversations with those who have passed away.

Generative AI also plays a crucial role in the digital afterlife industry. These technologies enable the creation of highly realistic and interactive digital personas. But the high level of realism may blur the line between reality and simulation. This may enhance the user experience, but may also cause emotional and psychological distress.

A technology ripe for misuse

Digital afterlife technologies may aid the grieving process by offering continuity and connection with the deceased. Hearing a loved one’s voice or seeing their likeness may provide comfort and help process the loss.

For some of us, these digital immortals could be therapeutic tools. They may help us to preserve positive memories and feel close to loved ones even after they have passed away.

But for others, the emotional impact may be profoundly negative, exacerbating grief rather than alleviating it. AI recreations of loved ones have the potential to cause psychological harm if the bereaved ends up having unwanted interactions with them. It’s essentially being subjected to a “digital haunting.”

Other major issues and ethical concerns surrounding this tech include consent, autonomy and privacy.

For example, the deceased may not have agreed to their data being used for a “digital afterlife.”

There’s also the risk of misuse and data manipulation. Companies could exploit digital immortals for commercial gain, using them to advertise products or services. Digital personas could be altered to convey messages or behaviors the deceased would never have endorsed.

We need regulation

To address concerns around this quickly emerging industry, we need to update our legal frameworks. We need to address issues such as digital estate planning, who inherits the digital personas of the deceased, and digital memory ownership.

The European Union’s General Data Protection Regulation (GDPR) recognizes post-mortem privacy rights, but faces challenges in enforcement.

Social media platforms control access to deceased users' data, often against heirs' wishes, with clauses such as “no right of survivorship” complicating matters. These platform practices limit the GDPR's effectiveness in practice. Comprehensive protection will require re-evaluating such contractual rules and aligning them with human rights.

The digital afterlife industry offers comfort and memory preservation, but raises ethical and emotional concerns. Implementing thoughtful regulations and ethical guidelines can honor both the living and the dead, to ensure digital immortality enhances our humanity.

What can we do?

Researchers have recommended several ethical guidelines and regulations. Some recommendations include:

  • obtaining informed, documented consent from people before they die to create digital personas
  • age restrictions to protect vulnerable groups
  • clear disclaimers to ensure transparency
  • strong data privacy and security measures.

Drawing on ethical frameworks from archaeology, a 2018 study suggested treating digital remains as integral to personhood, proposing regulations to ensure dignity, especially in re-creation services.

Dialogue between policymakers, industry and academics is crucial for developing ethical and regulatory solutions. Providers should also offer ways for users to respectfully terminate their interactions with digital personas.

Through careful, responsible development, we can create a future where digital afterlife technologies meaningfully and respectfully honor our loved ones.

As we navigate this brave new world, it is crucial to balance the benefits of staying connected with our loved ones against the potential risks and ethical dilemmas.

By doing so, we can make sure the digital afterlife industry develops in a way that respects the memory of the deceased and supports the emotional well-being of the living.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
An eerie ‘digital afterlife’ is no longer science fiction. So how do we navigate the risks? (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-eerie-digital-afterlife-longer-science.html
