
Apple delays rollout of AI features in Europe

Credit: Armand Valendez from Pexels

Apple on Friday said it would delay the rollout of its recently announced AI features in Europe because of “regulatory uncertainties” linked to the EU’s new landmark legislation to curb the power of big tech.

Citing the European Union’s Digital Markets Act (DMA), a spokesperson for the iPhone-making juggernaut said “we do not believe that we will be able to roll out these features to our EU users this year.”

Apple earlier this month unveiled “Apple Intelligence,” its suite of AI features for its coveted devices, as it looks to reassure users that it is not falling behind amid the AI frenzy.

The announcement included a partnership with OpenAI that would make ChatGPT available to iPhone users on request.

Apple said the feature, as well as its iPhone Mirroring and SharePlay Screen Sharing enhancements, were put on hold over concern “that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security.”

Apple Intelligence, which runs only on the company’s in-house technology, will enable users to create their own emojis based on a description in everyday language, or to generate brief summaries of emails in the mailbox.

“We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety,” the company added.

In an effort to instill fair competition in Europe, the DMA sets out a list of dos and don’ts for the specially designated internet gatekeepers that include Apple.

“The EU is an attractive market of 450 million potential users, and has always been open for business for any company that wants to provide services in the European internal market,” an EU spokesperson said.

“Gatekeepers are welcome to offer their services in Europe, provided that they comply with our rules aimed at ensuring fair competition,” the EU added.

The EU’s competition supremo Margrethe Vestager on Tuesday warned that Apple was falling short of complying with the DMA as the bloc carries out a probe into the company’s business practices.

“We have a number of Apple issues; I find them very serious. I was very surprised that we would have such suspicions of Apple being non-compliant,” Vestager told CNBC.

Her comments came after the Financial Times, citing people close to the investigation, reported that Apple was about to face charges in relation to the probe.

The DMA empowers the European Commission to investigate, fine and impose structural remedies on non-compliant gatekeepers.

Penalties can reach up to 10 percent of global annual turnover, with repeat offenders facing up to 20 percent.

© 2024 AFP

Citation:
Apple delays rollout of AI features in Europe (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-apple-delays-rollout-ai-features.html


Study explores ice-based electricity generation



Cell construction. Top row from left: (A) Prototype cell from water ice showing the top electrode affixed to a wooden frame. (B) Wooden forms for fabricating the top cells. Forms were sanded and painted (high-gloss, two coats) inside to promote removal. Bottom row from left: (C) Freezing the top cell layers in wooden forms, for subsequent removal and transfer to the bus pans. The top electrodes in their wooden frames are also visible. (D) Middle layer preparation with glass spacers visible. Credit: PLOS ONE (2023). DOI: 10.1371/journal.pone.0285507

Last year, researchers from the US and Canada reported in PLOS ONE that they had created electrical batteries from ice. The electrical output is modest, just 0.1 milliwatt, but this may be a sign of good things to come. The scientists worked over the course of two seasons to design and produce electrochemical cells that generate electricity.

Dr. Daniel Helman and Dr. Matthew Retallack met at the European Consortium for Political Research in Montreal, Canada in 2015. Helman was presenting ideas about solar panels from ice, and Retallack was the discussant of the session.

“I think there was a mutual respect and love of research,” says Helman. They went on to develop different prototypes. The model that finally won out uses acid, plus a few additives, to create a pH difference between two layers of ice.

The most mobile charge carrier in ice is the proton, so it makes sense to think of protons traveling from one layer to the other because of the pH difference. The movement of charged particles is how batteries generate electricity.

In this case, table salt, kaolinite clay and monopotassium phosphate help to donate or receive charged particles, along with muriatic acid (HCl). A mesh screen and sheet aluminum were used as the electrodes.
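
As a rough sanity check on the physics, the ideal open-circuit voltage of such a proton concentration cell can be estimated with the Nernst equation. The short Python sketch below illustrates that textbook relation; it is not the authors' model, and the pH difference of 3 units is an assumed value.

```python
# Back-of-envelope estimate (not the authors' model): the ideal open-circuit
# voltage of a proton concentration cell scales with the pH difference
# between the two ice layers, per the Nernst equation.
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_voltage(delta_ph, temp_k=273.15):
    """Ideal open-circuit voltage (volts) for a given pH difference."""
    return (R * temp_k / F) * math.log(10) * delta_ph

# Assumed pH difference of 3 units between the acidified and untreated
# layers, at the freezing point of water.
print(f"{nernst_voltage(3.0) * 1000:.0f} mV")  # ~163 mV
```

At roughly 54 mV per pH unit near 0°C, even several pH units of difference yield well under a volt, consistent with the modest output the team reports.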

The materials used in the experiments are all commonly available and generally regarded as safe, which raises the question of what might be possible with more optimized materials. Moreover, adding photosensitive particles could plausibly produce dye-sensitized solar cells.

The original experiments for dye-sensitized solar cells used chlorophyll taken from spinach to change local pH in response to sunlight. And while the electrical output of ice cells may be small, large swaths of land at high latitudes might be available.

Fields, lakes or other open land might be safely put to good use in humanity’s quest to transition away from fossil fuels. The additives in this experiment were chosen for their relative safety in the environment.

The generation of electricity from ice also sheds light on one of the more enduring questions facing science: Where did life come from? Current thought is that organisms originated either in small ponds near volcanic, geothermal fields, or near mid-ocean ridges. But there is a problem: without a membrane, RNA gets diluted and can no longer act as a catalyst.

An icy setting solves this problem. RNA may remain concentrated in small regions on, for example, the ice of a comet or meteorite. Thus, generation of electricity from ice could provide a proto-metabolism for the start of organismal development with self-catalyzing RNA on such icy meteorites (like the Murchison meteorite) or on an early Snowball Earth.

Neither of these ideas is far-fetched. Solar panels made from ice may be useful in some settings. And an icy-worlds hypothesis for the origin of life may explain why we don’t see any of the early stages here on Earth now.

Dr. Helman is currently a visiting assistant professor of Environmental Studies at Wofford College in Spartanburg, South Carolina. Dr. Retallack ran the experiments while at Carleton University in Ottawa.

More information:
Daniel S. Helman et al, Electrochemical cells from water ice? Preliminary methods and results, PLOS ONE (2023). DOI: 10.1371/journal.pone.0285507

Provided by
Wofford College

Citation:
Study explores ice-based electricity generation (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-explores-ice-based-electricity-generation.html


DeepMind demonstrates Genie, an AI app that can generate playable 2D worlds from a single image



Playing from Image Prompts: We can prompt Genie with images generated by text-to-image models, hand-drawn sketches or real-world photos. In each case we show the prompt frame and a second frame after taking one of the latent actions four consecutive times. In each case we see clear character movement, despite some of the images being visually distinct from the dataset. Credit: arXiv (2024). DOI: 10.48550/arxiv.2402.15391

AI researchers at Google’s DeepMind, working with colleagues at the University of British Columbia, have announced the development of Genie, an AI-backed application capable of turning a single image into a playable 2D virtual world.

The team has posted a paper outlining the work on the arXiv preprint server and has also posted an announcement page on DeepMind’s research site.

Two-dimensional video games, such as Super Mario Bros., allow players to guide a character across the screen as they proceed through a virtual world. In this new effort, the team at DeepMind has automated the creation of such games: Genie accepts a single image, such as a character in front of an imagined background, and generates the rest of the game from it. This was made possible by training the system on thousands of hours of video from hundreds of 2D video games.

To create Genie, the team first built an AI application that was able to tokenize video frames into millions of parameters that it could use to build new frames. They then added what they describe as a “latent action model” to make predictions about what a given next scene might look like based on the current image.

Next, they added a dynamics module that makes guesses about possible next sequences based on what the system learned during the training phase. The result is a series of frames linked together to form what looks like a 2D virtual world.
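
In toy form, that pipeline can be sketched as follows. The class names, tensor shapes and greedy generation loop below are illustrative assumptions for exposition, not DeepMind's implementation.

```python
# A conceptual sketch of a Genie-style pipeline: frames become discrete
# tokens, a latent action model labels transitions, and a dynamics model
# rolls the world forward one frame at a time. Sizes are toy values.
import torch
import torch.nn as nn

VOCAB, DIM, N_ACTIONS = 1024, 64, 8  # assumed toy sizes

class LatentActionModel(nn.Module):
    """Infers which discrete latent action turned one frame into the next
    (used during training to label video pairs with pseudo-actions)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(2 * DIM, N_ACTIONS)

    def forward(self, prev_tokens, next_tokens):
        prev = self.embed(prev_tokens).mean(dim=1)  # pool token positions
        nxt = self.embed(next_tokens).mean(dim=1)
        return self.head(torch.cat([prev, nxt], dim=-1)).argmax(dim=-1)

class DynamicsModel(nn.Module):
    """Predicts next-frame token logits from current tokens plus an action."""
    def __init__(self):
        super().__init__()
        self.tok_embed = nn.Embedding(VOCAB, DIM)
        self.act_embed = nn.Embedding(N_ACTIONS, DIM)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, action):
        h = self.tok_embed(tokens) + self.act_embed(action)[:, None, :]
        return self.out(h)  # (batch, n_tokens, VOCAB)

# Toy "play" loop: start from one tokenized prompt frame, then repeatedly
# apply a player-chosen latent action to roll the world forward.
dynamics = DynamicsModel()
frame = torch.randint(0, VOCAB, (1, 16 * 16))  # one 16x16 grid of tokens
action = torch.tensor([3])                     # arbitrary latent action id
for _ in range(4):
    frame = dynamics(frame, action).argmax(dim=-1)  # greedy next frame
```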

Credit: Google DeepMind

The researchers acknowledge that Genie is still very much a work in progress. It has several limitations not easily seen in the examples provided. It takes a very long time to run, for example—it is approximately 20 to 30 times slower than what the average player would consider normal speed. It also makes a lot of mistakes—it can create unrealistic worlds that are not playable, for example. It is also currently limited in scope—it can only run 16 frames at a time.

Still, the team at DeepMind suggests that Genie demonstrates a new step forward in video game development, allowing users to generate their own games based on their own unique preferences.

More information:
Jake Bruce et al, Genie: Generative Interactive Environments, arXiv (2024). DOI: 10.48550/arxiv.2402.15391

Genie: Generative Interactive Environments: sites.google.com/view/genie-2024/home and
deepmind.google/research/publications/60474/

Journal information:
arXiv


© 2024 Science X Network

Citation:
DeepMind demonstrates Genie, an AI app that can generate playable 2D worlds from a single image (2024, March 6)
retrieved 24 June 2024
from https://techxplore.com/news/2024-03-deepmind-genie-ai-app-generate.html


New dimensions of haptics in virtual reality



André Zenner with the tubular controller “Shifty,” in which a movable weight is installed. Credit: Oliver Dietze, DFKI

How can virtual reality (VR) be experienced haptically, i.e., through the sense of touch? This is one of the fundamental questions that modern VR research is investigating.

Computer scientist André Zenner, who is based in Saarbrücken, Germany, has come a significant step closer to answering this question in his doctoral thesis—by inventing new devices and developing software-based techniques inspired by human perception. He has now been awarded the “Best Dissertation Award” at the world’s leading VR conference.

The award-winning work is about how physical props (technical term: “proxies”) can be used to make objects in virtual environments tangible.

“Of course, you can’t have a proxy for every virtual object, then the approach wouldn’t be scalable. In my dissertation, I, therefore, thought about what devices could look like that could be used to simulate the physical properties of several different virtual objects as effectively as possible,” explains Zenner, who completed his doctorate at the Saarbrücken Graduate School of Computer Science at Saarland University and is now conducting research at Saarland University and the German Research Center for Artificial Intelligence.

This resulted in the prototypes for two special VR controllers, “Shifty” and “Drag:on.” VR controllers are devices that can be held in the user’s hand to control or manipulate objects in virtual reality using tracking technology.

“Shifty” is a tubular controller in which a movable weight is installed. The weight can be moved along the lengthwise axis by a motor, changing the center of gravity and inertia of the rod.

“In combination with corresponding visualizations in virtual reality, Shifty can be used to create the illusion that a virtual object is getting longer or heavier,” explains Zenner. In experiments, he was able to show that objects are perceived as lighter or smaller when the weight is close to the user’s hand, and that, coupled with the corresponding visual input, they are perceived as longer and heavier the further the weight in the rod moves away from the user.

“This is mainly due to changes in the inertia of the controller, as the overall weight does not change,” explains Zenner. The research and development department of gaming giant Sony is already experimenting with this concept and cites Zenner’s work in the development of new VR controllers.
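
A back-of-envelope calculation shows why this works: modeling the controller as a uniform rod held at one end plus a movable point mass, the moment of inertia about the grip grows with the square of the weight's distance from the hand. The masses and lengths below are assumed values, not Shifty's actual specifications.

```python
# Illustrative physics sketch (assumed dimensions, not the real Shifty):
# sliding the internal weight away from the grip raises the moment of
# inertia even though the total mass never changes.
def inertia_about_grip(rod_mass, rod_len, weight_mass, weight_pos):
    """Moment of inertia (kg*m^2) of a uniform rod pivoted at the grip
    end (m*L^2/3) plus a point mass at distance weight_pos from the grip."""
    return rod_mass * rod_len**2 / 3 + weight_mass * weight_pos**2

for d in (0.05, 0.15, 0.25):  # weight 5, 15 and 25 cm from the hand
    i = inertia_about_grip(0.2, 0.3, 0.1, d)
    print(f"weight at {d:.2f} m -> I = {i:.5f} kg*m^2")
```

With these assumed numbers, the inertia about the grip nearly doubles as the weight slides to the far end, which matches the reported perception of a longer, heavier object.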

The second controller, “Drag:on,” consists of two flamenco fans that can be unfolded using servomotors, thus increasing the air resistance of the controller. This means that the further the fans are unfolded, the more force the user has to exert to move the controller through the air.
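
The underlying physics is ordinary quadratic air drag: the resisting force scales linearly with the exposed fan area and with the square of the hand's speed. A minimal sketch, with assumed numbers rather than Drag:on's real dimensions:

```python
# Standard air-drag model, F = 0.5 * rho * Cd * A * v^2; the unfolded fan
# area A is the knob the controller turns. Values below are assumptions.
def drag_force(area_m2, speed_ms, rho=1.2, cd=1.1):
    """Drag force in newtons for air density rho and drag coefficient cd."""
    return 0.5 * rho * cd * area_m2 * speed_ms**2

for area in (0.01, 0.05, 0.10):  # fans closed, half open, fully open
    print(f"A = {area:.2f} m^2 -> F = {drag_force(area, 2.0):.2f} N at 2 m/s")
```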

“Coupled with the right visual stimuli, Drag:on can be used to create the impression that the user is holding a small shovel or a large paddle, for example, or that they are pushing a heavy trolley or are twisting a knob that is difficult to turn,” explains Zenner.

Both controllers are basic research, so-called “proofs of concept.” This means that the prototypes can be used to show in user experiments that different controller states can improve the perception of different VR objects. Still, specific products using this technology are not yet available on the market.

With the controllers, the Saarbrücken-based computer scientist first addressed the so-called “similarity problem.” The aim here is to ensure that virtual and real objects feel as similar as possible. In the second part of his work, he dealt with the so-called “colocation problem,” i.e., the question of how the proxy can be spatially located in real life where the user sees it in virtual reality.

This is particularly challenging as the controllers act as proxies for different virtual objects. Consequently, the user must be given the illusion that they are reaching for various objects, although in reality, they will always grasp the same proxy.

To achieve this, the researcher made use of the already established method of “hand redirection.” As the name suggests, this involves redirecting the movement of the hand in virtual reality so that the user thinks they are reaching to the left, for example, even though they are actually stretching their hand forward.

“We conducted experiments to investigate the point at which users realize that their hand has been redirected. Our results showed that this point was reached quickly, so we thought about how we could better conceal the hand redirection,” says Zenner.

The solution: he tricked the brain by only redirecting the hand when the brain was blind to visual changes—namely during blinking. Together with a student under his supervision, he developed the appropriate software and used the eye trackers built into many VR headsets.
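
In outline, the method applies the redirection offset only while the eyes are closed. The sketch below is a minimal version of that idea, assuming a hypothetical eyes_closed flag from the headset's eye tracker and a simple blending rule; the actual implementation differs.

```python
# Blink-suppressed hand redirection (conceptual sketch): the virtual hand's
# displacement from the real hand is only allowed to grow during blinks,
# when the user cannot notice sudden visual changes.
import numpy as np

MAX_OFFSET = np.array([0.10, 0.0, 0.0])  # assumed goal: warp 10 cm left
BLINK_GAIN = 0.2                         # fraction of remaining offset per blink

class BlinkRedirection:
    def __init__(self):
        self.offset = np.zeros(3)        # current virtual-hand displacement

    def update(self, real_hand_pos, eyes_closed):
        """Return the hand position to render in VR for this frame."""
        if eyes_closed:                  # hypothetical eye-tracker signal
            self.offset += BLINK_GAIN * (MAX_OFFSET - self.offset)
        return real_hand_pos + self.offset

# The rendered hand drifts from the real one only on blink frames.
redir = BlinkRedirection()
hand = np.array([0.0, 1.2, 0.4])
for blink in (False, True, False, True):
    virtual_hand = redir.update(hand, eyes_closed=blink)
```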

In control studies, the team was then able to show that their new controllers, in combination with hand redirection algorithms, led to more convincing VR perceptions than previously possible.

Provided by
Universität des Saarlandes

Citation:
Tricking the brain: New dimensions of haptics in virtual reality (2024, June 10)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-brain-dimensions-haptics-virtual-reality.html


Researchers leverage shadows to model 3D scenes, including objects blocked from view



Plato-NeRF is a computer vision system that combines lidar measurements with machine learning to reconstruct a 3D scene, including hidden objects, from only one camera view by exploiting shadows. Here, the system accurately models the rabbit in the chair, even though that rabbit is blocked from view. Credit: Massachusetts Institute of Technology

Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner?

Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that.

They have introduced a method that creates physically accurate, 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene.

They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall.

By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds.

In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster.

“Our key idea was taking these two things that have been done in different disciplines before and pulling them together—multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to explore and get the best of both worlds,” says Tzofi Klinghoffer, an MIT graduate student in media arts and sciences, affiliate of the MIT Media Lab, and lead author of the paper on PlatoNeRF.

Klinghoffer wrote the paper with his advisor, Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; senior author Rakesh Ranjan, a director of AI research at Meta Reality Labs; as well as Siddharth Somasundaram at MIT, and Xiaoyu Xiang, Yuchen Fan, and Christian Richardt at Meta. The research is being presented at the Conference on Computer Vision and Pattern Recognition, held 17–21 June.

Shedding light on the problem

Reconstructing a full 3D scene from one camera viewpoint is a complex problem.

Some machine-learning approaches employ generative AI models that try to guess what lies in the occluded regions, but these models can hallucinate objects that aren’t really there. Other approaches attempt to infer the shapes of hidden objects using shadows in a color image, but these methods can struggle when shadows are hard to see.

For PlatoNeRF, the MIT researchers built off these approaches using a new sensing modality called single-photon lidar. Lidars map a 3D scene by emitting pulses of light and measuring the time it takes that light to bounce back to the sensor. Because single-photon lidars can detect individual photons, they provide higher-resolution data.

The researchers use a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

By calculating how long it takes light to bounce twice and then return to the lidar sensor, PlatoNeRF captures additional information about the scene, including depth. The second bounce of light also contains information about shadows.

The system traces the secondary rays of light—those that bounce off the target point to other points in the scene—to determine which points lie in shadow (due to an absence of light). Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.
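
The timing geometry behind this is straightforward: a pulse travels from the lidar to the illuminated point, on to a second scene point, and back to the co-located sensor, so the measured delay is the total path length divided by the speed of light, and a point that returns no second bounce must be occluded. The toy sketch below illustrates both computations with a sphere standing in for the occluder; it is our illustration, not the PlatoNeRF code.

```python
# Two-bounce timing and a shadow test in toy form (not the PlatoNeRF code).
import numpy as np

C = 3.0e8  # speed of light, m/s

def two_bounce_time(lidar, p1, p2):
    """Round trip: lidar -> illuminated point p1 -> scene point p2 -> sensor
    (assumed co-located with the lidar). Returns seconds."""
    d = np.linalg.norm
    return (d(p1 - lidar) + d(p2 - p1) + d(lidar - p2)) / C

def in_shadow(p1, p2, sphere_center, sphere_radius):
    """True if a sphere blocks the segment p1 -> p2, i.e. p2 lies in shadow
    and contributes no second bounce to the measurement."""
    seg = p2 - p1
    t = np.clip(np.dot(sphere_center - p1, seg) / np.dot(seg, seg), 0.0, 1.0)
    return np.linalg.norm(p1 + t * seg - sphere_center) < sphere_radius

lidar = np.zeros(3)
p1, p2 = np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0])
print(f"{two_bounce_time(lidar, p1, p2) * 1e9:.1f} ns")   # ~17.5 ns
print(in_shadow(p1, p2, np.array([0.5, 0.0, 2.0]), 0.2))  # True: occluded
```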

The lidar sequentially illuminates 16 points, capturing multiple images that are used to reconstruct the entire 3D scene.

“Every time we illuminate a point in the scene, we are creating new shadows. Because we have all these different illumination sources, we have a lot of light rays shooting around, so we are carving out the region that is occluded and lies beyond the visible eye,” Klinghoffer says.

A winning combination

Key to PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene.
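
To make “geometry in the weights” concrete: a NeRF is essentially an MLP that maps a 3D point and a viewing direction to a density and a color, so rendering a novel view amounts to querying the same weights along new camera rays. A minimal toy version follows (illustrative only, not PlatoNeRF's network).

```python
# A toy NeRF-style network: scene geometry and appearance live entirely in
# the MLP weights, queried per 3D point. Real NeRFs add positional encoding
# of the inputs; it is omitted here for brevity.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # [density, r, g, b]
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        sigma = torch.relu(out[..., :1])   # volume density, >= 0
        rgb = torch.sigmoid(out[..., 1:])  # color in [0, 1]
        return sigma, rgb

# Novel views come from querying the same weights along new rays.
points, dirs = torch.rand(8, 3), torch.rand(8, 3)
sigma, rgb = TinyNeRF()(points, dirs)
```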

This ability to interpolate also leads to highly accurate scene reconstructions when combined with multibounce lidar, Klinghoffer says.

“The biggest challenge was figuring out how to combine these two things. We really had to think about the physics of how light is transporting with multibounce lidar and how to model that with machine learning,” he says.

They compared PlatoNeRF to two common alternative methods, one that only uses lidar and the other that only uses a NeRF with a color image.

They found that their method was able to outperform both techniques, especially when the lidar sensor had lower resolution. This would make their approach more practical to deploy in the real world, where lower resolution sensors are common in commercial devices.

“About 15 years ago, our group invented the first camera to ‘see’ around corners, which works by exploiting multiple bounces of light, or ‘echoes of light.’ Those techniques used special lasers and sensors, and used three bounces of light. Since then, lidar technology has become more mainstream, which led to our research on cameras that can see through fog,” Raskar says.

“This new work uses only two bounces of light, which means the signal to noise ratio is very high, and 3D reconstruction quality is impressive.”

In the future, the researchers want to try tracking more than two bounces of light to see how that could improve scene reconstructions. In addition, they are interested in applying more deep learning techniques and combining PlatoNeRF with color image measurements to capture texture information.

“While camera images of shadows have long been studied as a means to 3D reconstruction, this work revisits the problem in the context of lidar, demonstrating significant improvements in the accuracy of reconstructed hidden geometry. The work shows how clever algorithms can enable extraordinary capabilities when combined with ordinary sensors—including the lidar systems that many of us now carry in our pocket,” says David Lindell, an assistant professor in the Department of Computer Science at the University of Toronto, who was not involved with this work.

More information:
PlatoNeRF: 3D Reconstruction in Plato’s Cave via Single-View Two-Bounce Lidar

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Researchers leverage shadows to model 3D scenes, including objects blocked from view (2024, June 18)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-leverage-shadows-3d-scenes-blocked.html
