
Researchers create more precise 3D reconstructions using only two camera perspectives

From two images to a 3D object
Fields of application for 3D reconstructions include autonomous driving and monument conservation. Credit: Technical University Munich

In recent years, neural methods have become widespread in camera-based reconstructions. In most cases, however, hundreds of camera perspectives are needed. Meanwhile, conventional photometric methods exist which can compute highly precise reconstructions even from objects with textureless surfaces. However, these typically work only under controlled lab conditions.

Daniel Cremers, professor of Computer Vision and Artificial Intelligence at TUM, leader of the Munich Center for Machine Learning (MCML) and a director of the Munich Data Science Institute (MDSI), has developed a method together with his team that combines the two approaches.

It combines a neural network representation of the surface with a precise model of the illumination process that accounts for light absorption and for the distance between the object and the light source. The brightness in the images is used to determine the angle and distance of the surface relative to the light source.
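The idea that image brightness encodes both the surface's angle to the light and its distance from a nearby light source can be sketched with a simple Lambertian, inverse-square shading model. This is an illustrative approximation, not the authors' actual formulation; the function name, unit light power, and unit albedo are all assumptions:

```python
import numpy as np

def point_light_intensity(surface_point, normal, light_pos, light_power=1.0, albedo=1.0):
    """Predicted image brightness for a surface point lit by a near point light.

    Combines Lambertian shading (cosine of the angle between the surface
    normal and the light direction) with inverse-square attenuation
    (distance to the light source).
    """
    to_light = light_pos - surface_point
    dist = np.linalg.norm(to_light)
    light_dir = to_light / dist
    cos_angle = max(np.dot(normal, light_dir), 0.0)  # no light from behind the surface
    return albedo * light_power * cos_angle / dist**2

# An upward-facing point lit from directly above: moving the light from
# 1 unit away to 2 units away quarters the predicted brightness.
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
near = point_light_intensity(p, n, np.array([0.0, 0.0, 1.0]))
far = point_light_intensity(p, n, np.array([0.0, 0.0, 2.0]))
```

Inverting a model like this — solving for the surface orientation and depth that explain the observed brightness — is the core of photometric methods; the distance term is what makes nearby "point" lights informative in a way distant lighting is not.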

“That enables us to model the objects with much greater precision than existing processes. We can use the natural surroundings and can reconstruct relatively textureless objects for our reconstructions,” says Cremers.

The paper is published on the arXiv preprint server and will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2024) held in Seattle from June 17 to June 21, 2024.

Applications in autonomous driving and preservation of historical artifacts

The method can be used to preserve historical monuments or digitize museum exhibits. If these are destroyed or decay over time, photographic images can be used to reconstruct the originals and create authentic replicas.

Prof. Cremers' team also develops neural camera-based reconstruction methods for autonomous driving, in which a camera films the vehicle's surroundings. The autonomous car can model its surroundings in real time, develop a three-dimensional representation of the scene, and use it to make decisions.

The process is based on neural networks that predict 3D point clouds for individual video frames, which are then merged into a large-scale model of the roads traveled.
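The merging step can be illustrated with a minimal sketch: transform each frame's predicted points from camera coordinates into a shared world frame using that frame's camera pose, then concatenate. The real pipeline is neural and far more involved; the function name and the camera-to-world pose convention here are assumptions:

```python
import numpy as np

def merge_point_clouds(frames):
    """Fuse per-frame point clouds into one large-scale world model.

    frames: list of (points, R, t) tuples, where points is an (N, 3)
    array in camera coordinates, R is a 3x3 camera-to-world rotation,
    and t is the camera position in world coordinates.
    """
    # World point = R @ p + t for each camera-space point p.
    world = [pts @ R.T + t for pts, R, t in frames]
    return np.vstack(world)

# Two frames seeing the same camera-space point: one camera at the
# origin, one shifted 5 units along x (both axis-aligned).
pts = np.array([[1.0, 0.0, 2.0]])
I, zero = np.eye(3), np.zeros(3)
cloud = merge_point_clouds([(pts, I, zero), (pts, I, np.array([5.0, 0.0, 0.0]))])
```

In practice the poses themselves come from the reconstruction (e.g., visual odometry or SLAM) rather than being known in advance, and overlapping points are deduplicated or fused rather than simply stacked.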

More information:
Mohammed Brahimi et al, Sparse Views, Near Light: A Practical Paradigm for Uncalibrated Point-light Photometric Stereo, arXiv (2024). DOI: 10.48550/arxiv.2404.00098

Journal information:
arXiv


Citation:
Researchers create more precise 3D reconstructions using only two camera perspectives (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-precise-3d-reconstructions-camera-perspectives.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.






Turing test study shows humans rate artificial intelligence as more ‘moral’ than other people

Credit: Pixabay/CC0 Public Domain

A new study has found that when people are presented with two answers to an ethical question, most will think the answer from artificial intelligence (AI) is better than the response from another person.

“Attributions Toward Artificial Agents in a Modified Moral Turing Test,” a study conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs) which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse.”

“So, if we want to use these tools, we should understand how they operate, their limitations and that they’re not necessarily operating in the way we think when we’re interacting with them.”

To test how AI handles issues of morality, Aharoni designed a form of a Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000, computers might pass a test where you present an ordinary human with two interactants, one human and the other a computer, but they’re both hidden and their only way of communicating is through text. Then the human is free to ask whatever questions they want to in order to try to get the information they need to decide which of the two interactants is human and which is the computer,” Aharoni said.

“If the human can’t tell the difference, then, by all intents and purposes, the computer should be called intelligent, in Turing’s view.”

For his Turing test, Aharoni asked undergraduate students and AI the same ethical questions and then presented their written answers to participants in the study. They were then asked to rate the answers for various traits, including virtuousness, intelligence and trustworthiness.

“Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people,” Aharoni said. “Under that false assumption, they judged the answers’ attributes like ‘How much do you agree with this response, which response is more virtuous?'”

Overwhelmingly, the ChatGPT-generated responses were rated more highly than the human-generated ones.

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer and asked them to guess which was which,” Aharoni said.

For an AI to pass the Turing test, humans must not be able to tell the difference between AI responses and human ones. In this case, people could tell the difference, but not for an obvious reason.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” Aharoni said. “If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite—that the AI, in a sense, performed too well.”

According to Aharoni, this finding has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test—that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer, and there will be times when they do know and they will consult the computer for information because they trust it more than other people,” Aharoni said.

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

The findings are published in the journal Scientific Reports.

More information:
Eyal Aharoni et al, Attributions toward artificial agents in a modified Moral Turing Test, Scientific Reports (2024). DOI: 10.1038/s41598-024-58087-7

Citation:
Turing test study shows humans rate artificial intelligence as more ‘moral’ than other people (2024, May 6)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-turing-humans-artificial-intelligence-moral.html


European tech must keep pace with US, China: Meta’s Clegg

Credit: CC0 Public Domain

Europe is lagging behind both the United States and China when it comes to technology and innovation, Nick Clegg, a top executive with US firm Meta, has told AFP.

Clegg, president of global affairs at the parent company of Facebook, Instagram and WhatsApp, said Europe had a “real problem”.

“We are falling very rapidly behind the US and China,” said Clegg, who was promoting a scheme to mentor startups on the continent.

“I think for too long, the view has been that Europe’s only role is to regulate. And then China imitates and America innovates.”

But he argued it was not possible to “build success on the back of a law”.

“You build success on the back of innovation, entrepreneurship, and a partnership between big tech companies and small startups.”

Clegg was promoting a scheme run by Meta and two French companies to offer five European startups six months of mentoring and access to their facilities.

Clegg has spearheaded previous efforts by Meta to invest in tech in Europe, announcing in 2021 that the US firm would create 10,000 jobs there to help build the “metaverse”.

Meta burnt through billions of dollars trying to make its metaverse project a reality but has since changed focus to artificial intelligence and announced thousands of layoffs, including in the teams working on the metaverse.

© 2024 AFP

Citation:
European tech must keep pace with US, China: Meta’s Clegg (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-european-tech-pace-china-meta.html


Using drones will advance the inspection of remote runways in Canada and beyond, research suggests

Visualized results overlaid on a satellite map. The bright green contours highlight all detected targets, including runways, vegetation, water and rough surfaces, distinguished by fill color: runways in purple, water in blue, vegetation in green and rough surfaces in red. Credit: Drones (2024). DOI: 10.3390/drones8060225

With weather, limited flights and long distances, gravel runways at remote airports—particularly in northern Canada—are difficult to get to, let alone to inspect for safety.

So Northeastern University researcher Michal Aibin and his team have developed a more thorough, safer and faster way to inspect such runways using drones, computer vision and artificial intelligence. The work has been published in the journal Drones.

“Basically, what you do is you start the drone, you collect the data and—with coffee in your hand—you can inspect the entire runway,” says Aibin, visiting associate teaching professor of computer science at Northeastern’s Vancouver campus.

There are over 100 airports in Canada that are considered remote, Aibin says, meaning that they have no road or standard means of transportation leading to them. Thus, nearby communities’ food, medicine and other supplies all come by air.

The airports also predominantly feature gravel rather than asphalt runways, making them particularly susceptible to the elements.

But safety inspections are difficult. Engineers who inspect the remote airports must schedule a long flight, often during a narrow window of time dependent on the seasons, weather conditions and more.

A new, more reliable and less time-consuming method was needed.

So, Aibin worked with Northeastern associate teaching professor Lino Coria and student researchers to identify several types of defects for gravel runways, such as surface water pooling, encroaching vegetation, and smoothness defects like frost heaves, potholes and random large rocks.

Collaborating with Transport Canada (the Canadian government’s department of transportation) and Spexi Geospatial Inc., the researchers used computer vision and artificial intelligence to analyze drone images of remote runways in order to detect, characterize and classify defects.
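As a loose illustration of the segmentation idea, the sketch below labels each pixel of a drone image by its nearest reference color. The class list follows the paper's figure legend, but the colors, function name, and nearest-color rule are stand-ins for the learned computer-vision model actually described, which this sketch does not reproduce:

```python
import numpy as np

# Hypothetical reference colors, loosely following the paper's figure legend.
CLASSES = {
    "runway": np.array([128, 0, 128]),       # purple
    "water": np.array([0, 0, 255]),          # blue
    "vegetation": np.array([0, 128, 0]),     # green
    "rough_surface": np.array([255, 0, 0]),  # red
}

def classify_pixels(image):
    """Assign each pixel of an (H, W, 3) RGB image to the class whose
    reference color is nearest in RGB space — a toy stand-in for a
    trained segmentation network."""
    names = list(CLASSES)
    refs = np.stack([CLASSES[n] for n in names]).astype(float)  # (4, 3)
    flat = image.reshape(-1, 3).astype(float)                   # (P, 3)
    # Distance from every pixel to every reference color, via broadcasting.
    dists = np.linalg.norm(flat[:, None, :] - refs[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# A 1x2 tile: one runway-ish purple pixel and one water-ish blue pixel.
tile = np.array([[[130, 5, 120], [0, 10, 250]]])
labels = classify_pixels(tile)
```

A production pipeline would instead run a trained detector or segmentation network over high-resolution orthomosaic tiles and aggregate the per-tile detections into the inspection report the article describes.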

“Our biggest novelty is we take all the images of the runway and we assess all the defects—like there’s some rocks, there’s maybe a hole, there’s maybe some aspects that are not initially visible to the human eye,” Aibin says.

The result is a new procedure for inspecting airport runways using high-resolution photos taken from remote-controlled, commercially available drones and high-powered computing. The new method proved effective when demonstrated at several remote airports, Aibin says.

The process doesn’t totally eliminate humans—a person must fly the drone and evaluate the computer analysis, Aibin notes (although those tasks can be done remotely). But Aibin says the method saves time, reduces the need for inspectors on site, and makes inspecting a remote gravel runway a much less onerous task.

Aibin says that the next step is providing more real-world applications to test the new method. But he sees the method being expanded beyond remote Canada into other remote sections of the world such as in Australia and New Zealand.

“The need to fly an engineer to the site is no longer needed, which was the ultimate goal,” Aibin says. “As long as someone can fly a drone and take images, then it can be sent in the form of a report to speed up the process.”

More information:
Zhiyuan Yang et al, Next-Gen Remote Airport Maintenance: UAV-Guided Inspection and Maintenance Using Computer Vision, Drones (2024). DOI: 10.3390/drones8060225

This story is republished courtesy of Northeastern Global News news.northeastern.edu.

Citation:
Using drones will advance the inspection of remote runways in Canada and beyond, research suggests (2024, June 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-drones-advance-remote-runways-canada.html


Stand-up comedians test ability of LLMs to write jokes

Credit: AI-generated image

A small team of AI researchers at Google DeepMind has found that LLMs are not very good at writing funny jokes. They asked stand-up comedians to use LLMs to write stand-up routines and posted their findings on the arXiv preprint server.

To be successful, most stand-up comedians have to write a stand-up routine and perform it on stage. Such routines, or monologs, typically involve both storytelling and jokes, or describe humorous situations. Many also employ surprising or incongruous remarks, giving the audience a sudden insight into something they may not have considered in a certain way before.

Most professional stand-up comedians spend a great deal of time polishing their routines and testing them on small audiences before performing in front of large crowds or on television specials.

In this new effort, the team at DeepMind wondered if LLMs might be capable of creating not just jokes, but entire stand-up routines. To find out, they recruited 20 professional stand-up comedians who had used LLMs in their work before. The performers used an LLM to help them write an entire routine and then rated the results.

The researchers found that LLMs were quite good at coming up with jokes; unfortunately, few if any were funny. Most, they suggested, were generic in nature and few offered anything in the way of a surprise.

Left: Evaluation of the instruction-tuned LLMs as a creativity support tool for writing comedy, using a Likert scale; each row corresponds to a question in the survey. Right: Box plots (boxes for quartiles, whiskers for min and max) show the breakdown of the Creativity Support Index. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.20956

Overall, the comedians found the AI-generated jokes to lack the cutting edge that is typically needed for a joke to be funny. Many described the results as bland. But some of them did find the LLM to be useful in generating a routine that could be used for creating a basic structure around which they could build their own jokes.

The research team suggests the results were not surprising considering that makers of LLMs use filters to prevent them from generating output that could be offensive or edgy.

More information:
Piotr Wojciech Mirowski et al, A Robot Walks into a Bar: Can Language Models Serve as Creativity Support Tools for Comedy? An Evaluation of LLMs’ Humour Alignment with Comedians, arXiv (2024). DOI: 10.48550/arxiv.2405.20956

Journal information:
arXiv


© 2024 Science X Network

Citation:
Stand-up comedians test ability of LLMs to write jokes (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-comedians-ability-llms.html
