Europe is lagging behind both the United States and China on technology and innovation, Nick Clegg, a top executive at US firm Meta, has told AFP.
Clegg, president of global affairs at the parent company of Facebook, Instagram and WhatsApp, said Europe had a “real problem”.
“We are falling very rapidly behind the US and China,” said Clegg, who was promoting a scheme to mentor startups on the continent.
“I think for too long, the view has been that Europe’s only role is to regulate. And then China imitates and America innovates.”
But he argued it was not possible to “build success on the back of a law”.
“You build success on the back of innovation, entrepreneurship, and a partnership between big tech companies and small startups.”
Clegg was promoting a scheme run by Meta and two French companies to offer five European startups six months of mentoring and access to their facilities.
Clegg has spearheaded previous efforts by Meta to invest in tech in Europe, announcing in 2021 that the US firm would create 10,000 jobs there to help build the “metaverse”.
Meta burnt through billions of dollars trying to make its metaverse project a reality but has since changed focus to artificial intelligence and announced thousands of layoffs, including in the teams working on the metaverse.
Citation:
European tech must keep pace with US, China: Meta’s Clegg (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-european-tech-pace-china-meta.html
Given the weather, limited flights and long distances, gravel runways at remote airports—particularly in northern Canada—are difficult to get to, let alone inspect for safety.
So Northeastern University researcher Michal Aibin and his team have developed a more thorough, safer and faster way to inspect such runways using drones, computer vision and artificial intelligence. The work has been published in the journal Drones.
“Basically, what you do is you start the drone, you collect the data and—with coffee in your hand—you can inspect the entire runway,” says Aibin, visiting associate teaching professor of computer science at Northeastern’s Vancouver campus.
There are over 100 airports in Canada that are considered remote, Aibin says, meaning that they have no road or standard means of transportation leading to them. Thus, nearby communities’ food, medicine and other supplies all come by air.
The airports also predominantly feature gravel rather than asphalt runways, making them particularly susceptible to the elements.
But safety inspections are difficult. Engineers who inspect the remote airports must schedule a long flight, often during a narrow window of time dependent on the seasons, weather conditions and more.
A new, more reliable and less time-consuming method was needed.
So, Aibin worked with Northeastern associate teaching professor Lino Coria and student researchers to identify several types of defects for gravel runways, such as surface water pooling, encroaching vegetation, and smoothness defects like frost heaves, potholes and random large rocks.
Collaborating with Transport Canada (the Canadian government’s department of transportation) and Spexi Geospatial Inc., the researchers used computer vision and artificial intelligence to analyze drone images of remote runways in order to detect, characterize and classify defects.
“Our biggest novelty is we take all the images of the runway and we assess all the defects—like there’s some rocks, there’s maybe a hole, there’s maybe some aspects that are not initially visible to the human eye,” Aibin says.
The result is a new procedure for inspecting airport runways using high-resolution photos taken from remote-controlled, commercially available drones and high-powered computing. The new method proved effective when demonstrated at several remote airports, Aibin says.
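The paper's own code is not reproduced here, but the general shape of such an image-analysis pass is straightforward to sketch. Below is a minimal, hypothetical Python example built on the off-the-shelf Ultralytics YOLO detector; the weights file "gravel_defects.pt", the defect class names, and the confidence threshold are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch of a drone-image defect-detection pass.
# NOT the authors' actual pipeline: the model weights ("gravel_defects.pt"),
# class names, and confidence threshold are illustrative assumptions.
from pathlib import Path
from ultralytics import YOLO  # off-the-shelf object-detection library

model = YOLO("gravel_defects.pt")  # assumed: a detector fine-tuned on runway defects

def inspect_runway(image_dir: str, min_conf: float = 0.5) -> list[dict]:
    """Run the detector over every drone photo and collect flagged defects."""
    findings = []
    for img in sorted(Path(image_dir).glob("*.jpg")):
        for result in model(str(img), conf=min_conf, verbose=False):
            for box in result.boxes:
                findings.append({
                    "image": img.name,
                    "defect": model.names[int(box.cls)],  # e.g. "pothole", "pooling"
                    "confidence": float(box.conf),
                    "bbox": [round(v, 1) for v in box.xyxy[0].tolist()],
                })
    return findings

# Example: summarize the defects flagged along one runway survey
report = inspect_runway("survey_photos/")
print(f"{len(report)} potential defects flagged for human review")
```

A real deployment would aggregate these detections into the kind of report Aibin describes, with a human reviewing every flagged image.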
The process doesn’t totally eliminate humans—a person must fly the drone and evaluate the computer analysis, Aibin notes (although those tasks can be done remotely). But Aibin says the method saves time, reduces the need for inspectors on site, and makes inspecting a remote gravel runway a much less onerous task.
Aibin says the next step is testing the new method in more real-world settings. He also sees it expanding beyond remote Canada to other remote parts of the world, such as Australia and New Zealand.
“The need to fly an engineer to the site is no longer needed, which was the ultimate goal,” Aibin says. “As long as someone can fly a drone and take images, then it can be sent in the form of a report to speed up the process.”
More information:
Zhiyuan Yang et al, Next-Gen Remote Airport Maintenance: UAV-Guided Inspection and Maintenance Using Computer Vision, Drones (2024). DOI: 10.3390/drones8060225
This story is republished courtesy of Northeastern Global News news.northeastern.edu.
Citation:
Using drones will advance the inspection of remote runways in Canada and beyond, research suggests (2024, June 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-drones-advance-remote-runways-canada.html
A small team of AI researchers at Google DeepMind has found that LLMs are not very good at writing jokes that are funny. The team asked stand-up comedians to use LLMs to write stand-up routines and posted its findings on the arXiv preprint server.
To succeed, most stand-up comedians must write a routine and perform it on stage. Such routines, or monologs, typically involve both storytelling and jokes, or describe humorous situations. Many also employ surprising or incongruous remarks, giving the audience sudden insight into something they may not have considered in that way before.
Most professional stand-up comedians spend a great deal of time polishing their routines and testing them on small audiences before performing in front of large crowds or on television specials.
In this new effort, the team at DeepMind wondered if LLMs might be capable of creating not just jokes, but entire stand-up routines. To find out, they recruited 20 professional stand-up comedians who had used LLMs in their work before. The performers used an LLM to help them write an entire routine and then rated the results.
The researchers found that LLMs were quite good at coming up with jokes; unfortunately, few if any were funny. Most, they suggested, were generic in nature and few offered anything in the way of a surprise.
Overall, the comedians found the AI-generated jokes lacked the cutting edge typically needed for a joke to be funny; many described the results as bland. But some did find the LLM useful for generating a basic structure around which they could build their own jokes.
The research team suggests the results were not surprising considering that makers of LLMs use filters to prevent them from generating output that could be offensive or edgy.
More information:
Piotr Wojciech Mirowski et al, A Robot Walks into a Bar: Can Language Models Serve as Creativity Support Tools for Comedy? An Evaluation of LLMs’ Humour Alignment with Comedians, arXiv (2024). DOI: 10.48550/arxiv.2405.20956
Citation:
Stand-up comedians test ability of LLMs to write jokes (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-comedians-ability-llms.html
Sometimes it seems like an AI is helping, but the benefit is actually a placebo effect—people performing better simply because they expect to be doing so—according to new research from Aalto University. The study also shows how difficult it is to shake people’s trust in the capabilities of AI systems.
In this study, participants were tasked with a simple letter recognition exercise. They performed the task once on their own and once supposedly aided by an AI system. Half of the participants were told the system was reliable and would enhance their performance; the other half were told it was unreliable and would worsen their performance.
The findings are published in Proceedings of the CHI Conference on Human Factors in Computing Systems.
“In fact, neither AI system ever existed. Participants were led to believe an AI system was assisting them, when in reality, what the sham-AI was doing was completely random,” explains doctoral researcher Agnes Kloft.
The participants had to pair letters that popped up on screen at varying speeds. Surprisingly, both groups performed the exercise more efficiently—more quickly and attentively—when they believed an AI was involved.
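As a rough illustration of how such a sham manipulation can be wired up, here is a minimal Python sketch. The letter set, the suggestion logic, and the 50/50 framing split are assumptions for illustration, not the study's actual materials.

```python
# Minimal sketch of a sham-AI manipulation like the one described: the
# "assistance" is generated at random, independent of the stimulus.
# The letter alphabet and framing assignment are illustrative assumptions.
import random

LETTERS = "ABCDEF"  # assumed stimulus alphabet

def sham_ai_suggestion() -> str:
    """The 'AI' never looks at the screen; it simply guesses."""
    return random.choice(LETTERS)

def assign_framing(participant_id: int) -> str:
    """Half the participants hear the system is reliable, half that it is not."""
    return "reliable" if participant_id % 2 == 0 else "unreliable"

# One simulated trial: the suggestion carries no information about the target
target = random.choice(LETTERS)
print(f"target={target} sham_suggestion={sham_ai_suggestion()} "
      f"framing={assign_framing(7)}")
```

The key property is that the suggestion is statistically independent of the target, so any performance gain can only come from participants' expectations.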
“What we discovered is that people have extremely high expectations of these systems, and we can’t make them AI doomers simply by telling them a program doesn’t work,” says Assistant Professor Robin Welsch.
Following the initial experiments, the researchers conducted an online replication study that produced similar results. They also added a qualitative component, inviting participants to describe their expectations of performing with an AI. Most had a positive outlook toward AI, and, surprisingly, even skeptical people still had positive expectations about its performance.
The findings pose a problem for the methods generally used to evaluate emerging AI systems. “This is the big realization coming from our study—that it’s hard to evaluate programs that promise to help you because of this placebo effect,” Welsch says.
While powerful technologies like large language models undoubtedly streamline certain tasks, subtle differences between versions may be amplified or masked by the placebo effect—and this is effectively harnessed through marketing.
The results also pose a significant challenge for research on human-computer interaction, since expectations will influence outcomes unless placebo-controlled studies are used.
“These results suggest that many studies in the field may have been skewed in favor of AI systems,” concludes Welsch.
More information:
Agnes Mercedes Kloft et al, “AI enhances our performance, I have no doubt this one will do the same”: The Placebo effect is robust to negative descriptions of AI, Proceedings of the CHI Conference on Human Factors in Computing Systems (2024). DOI: 10.1145/3613904.3642633
Citation:
Just believing that an AI is helping boosts your performance, study finds (2024, May 13)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-believing-ai-boosts.html
Apple is talking to major rival Meta about integrating the Facebook parent company’s generative AI into its products, as it tries to catch up with rivals on artificial intelligence, the Wall Street Journal reported Sunday.
The report comes after Apple also struck a deal with OpenAI, the creator of ChatGPT, to help power its Apple Intelligence suite of new AI features for its coveted products.
For months, pressure has been on Apple to persuade doubters on its AI strategy, after Microsoft and Google rolled out products in rapid-fire succession.
It has developed its own, smaller artificial intelligence models but has said it will turn to others, such as OpenAI, to boost its in-house offering.
According to the Journal, which cited sources close to the matter, Meta has held discussions with Apple over integrating its own generative AI model into Apple Intelligence.
Apple senior vice president of software engineering Craig Federighi said in early June that Apple also wanted to integrate capabilities from Google’s generative AI system, Gemini, into its devices.
The big challenge for Apple has been how to infuse ChatGPT-style AI—which voraciously feeds off data—into its products without weakening its heavily promoted user privacy and security, according to analysts.
Apple Intelligence will enable users to create their own emojis based on a description in everyday language, or to generate brief summaries of e-mails in the mailbox.
Apple said Siri, its voice assistant, will also get an AI-infused upgrade and will now appear as a pulsating light on the edge of the home screen.
Launched over 12 years ago, Siri has long been seen as a dated feature, overtaken by a new generation of assistants such as GPT-4o, OpenAI's latest offering.
According to Canalys, 16 percent of smartphones shipped this year will be equipped with generative AI features, a proportion it expects to rise to 54 percent by 2028.
Citation:
Apple holds talks with rival Meta over AI: Report (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-apple-rival-meta-ai.html