Researchers leverage shadows to model 3D scenes, including objects blocked from view
PlatoNeRF is a computer vision system that combines lidar measurements with machine learning to reconstruct a 3D scene, including hidden objects, from only one camera view by exploiting shadows. Here, the system accurately models the rabbit in the chair, even though that rabbit is blocked from view. Credit: Massachusetts Institute of Technology

Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner?

Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that.

They have introduced a method that creates physically accurate, 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene.

They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall.

By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds.

In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster.

“Our key idea was taking these two things that have been done in different disciplines before and pulling them together—multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to explore and get the best of both worlds,” says Tzofi Klinghoffer, an MIT graduate student in media arts and sciences, affiliate of the MIT Media Lab, and lead author of the paper on PlatoNeRF.

Klinghoffer wrote the paper with his advisor, Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; senior author Rakesh Ranjan, a director of AI research at Meta Reality Labs; as well as Siddharth Somasundaram at MIT, and Xiaoyu Xiang, Yuchen Fan, and Christian Richardt at Meta. The research is being presented at the Conference on Computer Vision and Pattern Recognition, held 17–21 June.

Shedding light on the problem

Reconstructing a full 3D scene from one camera viewpoint is a complex problem.

Some machine-learning approaches employ generative AI models that try to guess what lies in the occluded regions, but these models can hallucinate objects that aren’t really there. Other approaches attempt to infer the shapes of hidden objects using shadows in a color image, but these methods can struggle when shadows are hard to see.

For PlatoNeRF, the MIT researchers built off these approaches using a new sensing modality called single-photon lidar. Lidars map a 3D scene by emitting pulses of light and measuring the time it takes that light to bounce back to the sensor. Because single-photon lidars can detect individual photons, they provide higher-resolution data.

The researchers use a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

By calculating how long it takes light to bounce twice and then return to the lidar sensor, PlatoNeRF captures additional information about the scene, including depth. The second bounce of light also contains information about shadows.
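
To make the timing geometry concrete, here is a minimal sketch, assuming a co-located laser and sensor and ideal timing; the function names and the 20-nanosecond example are illustrative, not from the paper.

```python
# Minimal sketch, assuming a co-located laser and sensor and ideal timing.
# Not the PlatoNeRF code: function names and the 20 ns example are invented.

C = 299_792_458.0  # speed of light, m/s

def direct_bounce_depth(t_return_s: float) -> float:
    """Depth of the illuminated point from a one-bounce return.

    The pulse travels sensor -> point -> sensor, so the one-way
    distance is half the total path: d = c * t / 2.
    """
    return C * t_return_s / 2.0

def two_bounce_path_length(t_return_s: float) -> float:
    """Total path length sensor -> point A -> point B -> sensor.

    A single two-bounce return constrains only the *sum* of the three
    segments; combining returns from many illuminated points is what
    lets the system pin down where the second bounce happened.
    """
    return C * t_return_s

# Example: returns arriving 20 nanoseconds after the pulse
print(f"one-bounce depth:       {direct_bounce_depth(20e-9):.2f} m")    # ~3.00 m
print(f"two-bounce path length: {two_bounce_path_length(20e-9):.2f} m") # ~6.00 m
```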

The system traces the secondary rays of light—those that bounce off the target point to other points in the scene—to determine which points lie in shadow (due to an absence of light). Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.

The lidar sequentially illuminates 16 points, capturing multiple images that are used to reconstruct the entire 3D scene.

“Every time we illuminate a point in the scene, we are creating new shadows. Because we have all these different illumination sources, we have a lot of light rays shooting around, so we are carving out the region that is occluded and lies beyond the visible eye,” Klinghoffer says.

A winning combination

Key to PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene.
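
As a rough illustration of what "encoding geometry in the weights" means, the sketch below defines a toy NeRF-style network that maps a 3D point to color and density. The layer sizes, the positional-encoding depth, and the omission of view direction are all simplifications, not details of PlatoNeRF.

```python
# Toy NeRF-style network: a sketch of "geometry stored in the weights,"
# not the PlatoNeRF architecture. Layer sizes, the positional-encoding
# depth, and the omission of view direction are all simplifications.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # raw xyz plus sin/cos encodings
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB color + volume density
        )

    def positional_encoding(self, x: torch.Tensor) -> torch.Tensor:
        # Sinusoids of increasing frequency let the network represent
        # fine geometric detail from smooth 3D coordinates.
        feats = [x]
        for i in range(self.num_freqs):
            feats += [torch.sin((2 ** i) * x), torch.cos((2 ** i) * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        h = self.mlp(self.positional_encoding(xyz))
        rgb = torch.sigmoid(h[..., :3])   # colors constrained to [0, 1]
        density = torch.relu(h[..., 3:])  # density must be non-negative
        return torch.cat([rgb, density], dim=-1)

model = TinyNeRF()
queries = torch.rand(1024, 3)  # random 3D points to evaluate
print(model(queries).shape)    # torch.Size([1024, 4])
```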

This ability to interpolate also leads to highly accurate scene reconstructions when combined with multibounce lidar, Klinghoffer says.

“The biggest challenge was figuring out how to combine these two things. We really had to think about the physics of how light is transporting with multibounce lidar and how to model that with machine learning,” he says.

They compared PlatoNeRF with two common alternatives: one that uses only lidar and one that uses only a NeRF with a color image.

They found that their method was able to outperform both techniques, especially when the lidar sensor had lower resolution. This would make their approach more practical to deploy in the real world, where lower resolution sensors are common in commercial devices.

“About 15 years ago, our group invented the first camera to ‘see’ around corners, which works by exploiting multiple bounces of light, or ‘echoes of light.’ Those techniques used special lasers and sensors and relied on three bounces of light. Since then, lidar technology has become more mainstream, which led to our research on cameras that can see through fog,” Raskar says.

“This new work uses only two bounces of light, which means the signal to noise ratio is very high, and 3D reconstruction quality is impressive.”

In the future, the researchers want to try tracking more than two bounces of light to see how that could improve scene reconstructions. In addition, they are interested in applying more deep learning techniques and combining PlatoNeRF with color image measurements to capture texture information.

“While camera images of shadows have long been studied as a means to 3D reconstruction, this work revisits the problem in the context of lidar, demonstrating significant improvements in the accuracy of reconstructed hidden geometry. The work shows how clever algorithms can enable extraordinary capabilities when combined with ordinary sensors—including the lidar systems that many of us now carry in our pocket,” says David Lindell, an assistant professor in the Department of Computer Science at the University of Toronto, who was not involved with this work.

More information:
PlatoNeRF: 3D Reconstruction in Plato’s Cave via Single-View Two-Bounce Lidar

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Researchers leverage shadows to model 3D scenes, including objects blocked from view (2024, June 18)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-leverage-shadows-3d-scenes-blocked.html


Facial recognition startup Clearview AI settles privacy suit
Hoan Ton-That, CEO of Clearview AI, demonstrates the company’s facial recognition software using a photo of himself in New York on Tuesday, Feb. 22, 2022. Facial recognition startup Clearview AI has reached a settlement in a lawsuit alleging its massive collection of images violated the subjects’ privacy rights. Attorneys estimate the deal could be worth more than $50 million. Credit: AP Photo/Seth Wenig, File

Facial recognition startup Clearview AI reached a settlement Friday in an Illinois lawsuit alleging its massive photographic collection of faces violated the subjects’ privacy rights, a deal that attorneys estimate could be worth more than $50 million.

But the unique agreement gives plaintiffs in the federal suit a share of the company’s potential value, rather than a traditional payout. Attorneys’ fees estimated at $20 million also would come out of the settlement amount.

Judge Sharon Johnson Coleman, of the Northern District of Illinois, gave preliminary approval to the agreement Friday.

The case consolidated lawsuits from around the U.S. filed against Clearview, which pulled photos from social media and elsewhere on the internet to create a database it sold to businesses, individuals and government entities.

The company settled a separate case alleging violation of privacy rights in Illinois in 2022, agreeing to stop selling access to its database to private businesses or individuals. That agreement still allowed Clearview to work with federal agencies and local law enforcement outside Illinois, which has a strict digital privacy law.

Clearview does not admit any liability as part of the latest settlement agreement.

“Clearview AI is pleased to have reached an agreement on this class action settlement,” James Thompson, an attorney representing the company in the suit, said in a written statement Friday.

The lead plaintiffs’ attorney Jon Loevy said the agreement was a “creative solution” necessitated by Clearview’s financial status.

“Clearview did not have anywhere near the cash to pay fair compensation to the class, so we needed to find a creative solution,” Loevy said in a statement. “Under the settlement, the victims whose privacy was breached now get to participate in any upside that is ultimately generated, thereby recapturing to the class to some extent the ownership of their biometrics.”

It’s not clear how many people would be eligible to join the settlement. The agreement language is sweeping, covering anyone whose images or data appear in the company’s database and who lived in the U.S. on or after July 1, 2017.

A national campaign to notify potential plaintiffs is part of the agreement.

The attorneys for Clearview and the plaintiffs worked with Wayne Andersen, a retired federal judge who now mediates legal cases, to develop the settlement. In court filings presenting the agreement, Andersen bluntly writes that the startup could not have paid any legal judgment if the suit went forward.

“Clearview did not have the funds to pay a multi-million-dollar judgment,” he wrote in the filing. “Indeed, there was great uncertainty as to whether Clearview would even have enough money to make it through to the end of trial, much less fund a judgment.”

But some privacy advocates and people pursuing other legal action called the agreement a disappointment that won’t change the company’s operations.

Sejal Zota is an attorney and legal director for Just Futures Law, an organization representing plaintiffs in a California suit against the company. Zota said the agreement “legitimizes” Clearview.

“It does not address the root of the problem,” Zota said. “Clearview gets to continue its practice of harvesting and selling people’s faces without their consent, and using them to train its AI tech.”

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Facial recognition startup Clearview AI settles privacy suit (2024, June 22)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-facial-recognition-startup-clearview-ai.html


NASA’s laser relay system sends pet imagery to and from Space Station
A collage of the pet photos sent over laser links from Earth to LCRD (Laser Communications Relay Demonstration) to ILLUMA-T (Integrated LCRD Low Earth Orbit User Modem and Amplifier Terminal) on the space station. Animals submitted include cats, dogs, birds, chickens, cows, snakes, pigs, and more. Credit: NASA/Dave Ryan

Using NASA’s first two-way, end-to-end laser relay system, pictures and videos of cherished pets flew through space over laser communications links at a rate of 1.2 gigabits per second—faster than most home internet speeds.

NASA astronauts Randy Bresnik, Christina Koch, and Kjell Lindgren, along with other agency employees, submitted photos and videos of their pets to take a trip to and from the International Space Station.

The transmissions allowed NASA’s SCaN (Space Communications and Navigation) program to showcase the power of laser communications while simultaneously testing out a new networking technique.

“The pet imagery campaign has been rewarding on multiple fronts for the ILLUMA-T, LCRD, and HDTN teams,” said Kevin Coggins, deputy associate administrator and SCaN program manager at NASA Headquarters in Washington. “Not only have they demonstrated how these technologies can play an essential role in enabling NASA’s future science and exploration missions, it also provided a fun opportunity for the teams to ‘picture’ their pets assisting with this innovative demonstration.”

This demonstration was inspired by “Taters the Cat”—an orange cat whose video was transmitted 19 million miles over laser links to the DSOC (Deep Space Optical Communications) payload on the Psyche mission. LCRD, DSOC, and ILLUMA-T are three of NASA’s ongoing laser communications demonstrations to prove out the technology’s viability.

NASA sends pet photos and videos to space over laser links during a May 2024 demonstration. Credit: NASA/Dave Ryan

The images and videos started on a computer at a mission operations center in Las Cruces, New Mexico. From there, NASA routed the data to optical ground stations in California and Hawaii. Teams modulated the data onto infrared light signals, or lasers, and sent the signals to NASA’s LCRD (Laser Communications Relay Demonstration) located 22,000 miles above Earth in geosynchronous orbit. LCRD then relayed the data to ILLUMA-T (Integrated LCRD Low Earth Orbit User Modem and Amplifier Terminal), a payload currently mounted on the outside of the space station.

Since the beginning of space exploration, NASA missions have relied on radio frequency communications to send data to and from space. Laser communications, also known as optical communications, employ infrared light instead of radio waves to send and receive information.

While both infrared and radio travel at the speed of light, infrared light can transfer more data in a single link, making it more efficient for science data transfer. This is because infrared light's shorter wavelength can pack more information onto a signal than radio communications.
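
For a sense of scale, a quick back-of-envelope calculation shows what a 1.2 gigabit-per-second link means in practice; it ignores protocol overhead and latency, and the file sizes are invented examples.

```python
# Back-of-envelope only: transfer times over a 1.2 Gbps link, ignoring
# protocol overhead and latency. File sizes are invented examples.

LINK_RATE_BPS = 1.2e9  # 1.2 gigabits per second

def transfer_seconds(size_bytes: float) -> float:
    return size_bytes * 8 / LINK_RATE_BPS  # bytes -> bits, then divide by rate

for label, size_bytes in [("50 MB pet video", 50e6), ("2 GB science product", 2e9)]:
    print(f"{label}: {transfer_seconds(size_bytes):.2f} s")
# 50 MB pet video: 0.33 s
# 2 GB science product: 13.33 s
```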

A graphical representation of NASA communicating in space using High-Rate Delay Tolerant Networking. Credit: NASA/Morgan Johnson

This demonstration also allowed NASA to test out another networking technique. When data is transmitted across thousands and even millions of miles in space, the delay and the potential for disruption or data loss are significant. To overcome this, NASA developed a suite of communications networking protocols called Delay/Disruption Tolerant Networking, or DTN. The “store-and-forward” process used by DTN allows data to be forwarded as it is received or stored for future transmission if signals become disrupted in space.
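
The store-and-forward idea can be pictured with a toy sketch like the one below; this is a simplification for illustration, not NASA's DTN protocol suite, and the class and bundle names are invented.

```python
# Toy illustration of store-and-forward, not NASA's DTN protocol suite;
# the class and bundle names are invented.
from collections import deque

class RelayNode:
    def __init__(self) -> None:
        self.buffer = deque()   # bundles held while the link is disrupted
        self.link_up = True

    def receive(self, bundle: str, forward) -> None:
        if self.link_up:
            forward(bundle)             # link available: forward immediately
        else:
            self.buffer.append(bundle)  # link disrupted: store for later

    def restore_link(self, forward) -> None:
        self.link_up = True
        while self.buffer:              # drain everything stored, in order
            forward(self.buffer.popleft())

relay = RelayNode()
delivered = []
relay.receive("bundle-1", delivered.append)  # forwarded at once
relay.link_up = False
relay.receive("bundle-2", delivered.append)  # stored
relay.receive("bundle-3", delivered.append)  # stored
relay.restore_link(delivered.append)         # disruption ends
print(delivered)  # ['bundle-1', 'bundle-2', 'bundle-3']
```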

To enable DTN at higher data rates, a team at NASA’s Glenn Research Center in Cleveland developed an advanced implementation, HDTN (High-Rate Delay Tolerant Networking). This networking technology acts as a high-speed path for moving data between spacecraft and across communication systems, enabling data transfer up to four times faster than current DTN technology—allowing high-speed laser communication systems to utilize the “store-and-forward” capability of DTN.

The HDTN implementation aggregates data from a variety of sources, such as discoveries from the scientific instruments on the space station, and prepares the data for transmission back to Earth. For the pet photo and video experiment, the content was routed using DTN protocols as it traveled from Earth to LCRD and on to ILLUMA-T on the space station. Once the data arrived, an onboard HDTN payload demonstrated its ability to receive and reassemble it into files.

The benefits of laser communications: more efficient, lighter systems, increased security, and more flexible ground systems. Credit: NASA/Dave Ryan

This optimized implementation of DTN technology aims to enable a variety of communications services for NASA, from improving security through encryption and authentication to providing network routing of 4K high-definition multimedia and more. All of these capabilities are being tested on the space station with ILLUMA-T and LCRD.

As NASA’s Artemis campaign prepares to establish a sustainable presence on and around the moon, SCaN will continue to develop ground-breaking communications technology to bring the scalability, reliability, and performance of the Earth-based internet to space.

Citation:
NASA’s laser relay system sends pet imagery to and from Space Station (2024, June 11)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-nasa-laser-relay-pet-imagery.html


Can AI improve soccer teams’ success from corner kicks? Liverpool and others are betting it can
Credit: Google DeepMind

Last Sunday, Liverpool faced Manchester United in the quarter finals of the FA Cup—and in the final minute of extra time, with the score tied at three-all, Liverpool had the crucial opportunity of a corner kick. A goal would surely mean victory, but losing possession could be risky.

What was Liverpool to do? Attack or play it safe? And if they were to attack, how best to do it? What kind of delivery, and where should players be waiting to attack the ball?

Set-piece decisions like this are vital not only in soccer but in many other competitive sports, and traditionally they are made by coaches on the basis of long experience and analysis. However, Liverpool has recently been looking to an unexpected source for advice: researchers at the Google-owned UK-based artificial intelligence (AI) lab DeepMind.

In a paper published March 19 in Nature Communications, DeepMind researchers describe an AI system for soccer tactics called TacticAI, which can assist in developing successful corner kick routines. The paper says experts at Liverpool favored TacticAI’s advice over existing tactics in 90% of cases.

What TacticAI can do

At a corner kick, play stops and each team has the chance to organize its players on the field before the attacking team kicks the ball back into play—usually with a specific prearranged plan in mind that will (hopefully) let them score a goal. Advice on these prearranged plans or routines is what TacticAI sets out to offer.

TacticAI represents a corner-kick setup as a ‘graph’ of player positions and relationships, which it then uses to make predictions. Credit: Wang et al. / Nature Communications

The package has three components: one that predicts which player is most likely to receive the ball in a given scenario, another that predicts whether a shot on goal will be taken, and a third that recommends how to adjust the position of players to increase or decrease the chances of a shot on goal.

Trained on a dataset of 7,176 corner kicks from Premier League matches, TacticAI used a technique called “geometric deep learning” to identify key strategic patterns.
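
As a rough picture of that graph representation, the sketch below builds a corner-kick "graph" with one node per player and an edge between every pair; the exact node features and the helper name are illustrative guesses, not the paper's specification.

```python
# Rough picture of the graph representation: one node per player, an edge
# between every pair. Node features and the helper name are illustrative
# guesses, not the paper's specification.
import itertools

def corner_kick_graph(players):
    """players: dicts like {"id": 7, "team": "attack", "x": 102.0, "y": 30.5,
    "vx": 0.1, "vy": -0.3}. Returns node features and a fully connected
    edge list for a graph network to consume."""
    nodes = {
        p["id"]: [p["x"], p["y"], p["vx"], p["vy"],
                  1.0 if p["team"] == "attack" else 0.0]
        for p in players
    }
    edges = list(itertools.combinations(nodes, 2))  # all player pairs
    return nodes, edges

setup = [
    {"id": 7, "team": "attack", "x": 102.0, "y": 30.5, "vx": 0.1, "vy": -0.3},
    {"id": 9, "team": "attack", "x": 105.5, "y": 34.0, "vx": 0.0, "vy": 0.2},
    {"id": 4, "team": "defend", "x": 104.0, "y": 33.0, "vx": -0.1, "vy": 0.0},
]
nodes, edges = corner_kick_graph(setup)
print(len(nodes), "nodes,", len(edges), "edges")  # 3 nodes, 3 edges
```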

The researchers say this approach could be applied not only to soccer, but to any sport in which a stoppage in the game allows teams to deliberately maneuver players into place unopposed, and plan the next sequence of play. In soccer, it could also be expanded in future to incorporate throw-in routines as well as other set pieces such as attacking free kicks.

Vast amounts of data

AI in soccer is not new. Even in amateur and semi-professional soccer, AI-powered auto-tracking camera systems are becoming commonplace, for example. At the last men’s and women’s World Cups in 2022 and 2023, AI in conjunction with advanced ball-tracking technology produced semi-automated offside decisions with an unprecedented level of accuracy.

Professional soccer clubs have analytical departments using AI at every level of the game, predominantly in the areas of scouting, recruitment and athlete monitoring. Other research has also tried to predict players’ shots on goal, or guess from a video what off-screen players are doing.

Bringing AI into tactical decisions promises to offer coaches a more objective and analytical approach to the game. Algorithms can process vast amounts of data, identifying patterns that may not be apparent to the naked eye, giving teams valuable insights into their own performance as well as that of their opponents.

A useful tool

AI may be a useful tool, but it cannot make decisions about match play alone. An algorithm might suggest the optimal positional setup for an in-swinging corner or how best to exploit the opposition’s defensive tactics.

What AI cannot do is make decisions on the fly—like deciding whether to take a corner quickly to exploit an opponent’s lapse in concentration.

Sometimes the best move is a speedy reaction to conditions on the ground, not an elaborate prearranged set play.

There’s also something to be said for allowing players creative license in some situations. Once teams are using AI to suggest the optimal corner strategy, opponents will doubtless counter with their own AI-prompted defensive setup.

So while the tech behind TacticAI is very interesting, it remains to be seen whether it can evolve to be useful in open play. Could AI get to the stage where it can recognize the best tactical player substitution in a given situation?

DeepMind researchers have advanced decision-making like this in their sights for future research, but will it ever reach a point where coaches would trust it?

My sense from discussions with people in the industry is that many believe AI should be used only as an input to decision-making, not allowed to make decisions itself. There is no substitute for the experience and instinct of the best coaches: the intangible ability to feel what the game needs, to make a change in formation, to play someone out of position.

Smart tactics, but what about strategy?

Coming back to that crucial Liverpool corner in last Sunday’s FA Cup quarter final: we don’t know whether Liverpool’s manager Jürgen Klopp considered AI advice, but the decision was made to play an attacking corner kick, presumably in the hope of scoring a last-minute winner.

The out-swinging delivery into the box may well have been the tactic with the highest probability of scoring a goal—but things rapidly went wrong. Manchester United gained possession of the ball, moved it down the pitch on the counterattack and slotted home the winning goal, sending Liverpool out of the tournament at the last moment.

So while AI might suggest the optimal delivery and setup for a set piece, a coach might decide the wiser move is to play safe and avoid the risk of a counterattack. If TacticAI continues its career progression as a coaching assistant, it will no doubt learn that keeping the ball in the corner and playing for penalties may sometimes be the better option.

More information:
Zhe Wang et al, TacticAI: an AI assistant for football tactics, Nature Communications (2024). DOI: 10.1038/s41467-024-45965-x

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Can AI improve soccer teams’ success from corner kicks? Liverpool and others are betting it can (2024, March 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-03-ai-soccer-teams-success-corner.html


Virtual reality as a reliable shooting performance-tracking tool
Credit: Pixabay/CC0 Public Domain

Virtual reality technology can do more than teach weaponry skills in law enforcement and military personnel, a new study suggests: It can accurately record shooting performance and reliably track individuals’ progress over time.

In the study of 30 people with a range of experience levels in handling a rifle, researchers at The Ohio State University found that a ballistic simulator captured data on the shooters’ accuracy, decision-making and reaction time—down to the millimeter in distance and millisecond in time—on a consistent basis.

In addition to confirming that the simulator—called the VirTra V-100—is a dependable research tool, the findings could lead to establishing the first-ever standardized performance scores for virtual reality ballistics training.

“To our knowledge, we’re the first team to answer the question of whether the simulator could be converted to an assessment tool and if it’s credible to use it day-to-day,” said Alex Buga, first author of the study and a Ph.D. student in kinesiology at Ohio State.

“We’ve figured out how to export the data and interpret it. We’ve focused on the three big challenges of marksmanship, decision-making and reaction time to measure 21 relevant variables—allowing us to put a report in a user’s hand and say, ‘This is how accurate, precise, focused and fast you are.'”

The study was published in the Journal of Strength and Conditioning Research.

U.S. military leaders and law enforcement agencies have shown an interest in increasing the use of virtual reality for performance assessment, said Buga and senior study author Jeff Volek, professor of human sciences at Ohio State. Earlier this year, an Ohio Attorney General Task Force on the Future of Police Training in Ohio recommended incorporating virtual reality technology into training protocols.

Volek is the principal investigator on a project focused on improving the health of military service members, veterans and the American public. As part of that initiative, the research team is investigating the extent to which nutritional ketosis reduces detrimental effects of sleep loss on cognitive and physical performance in ROTC cadets—including their shooting ability as measured by the VirTra simulator. Verifying the simulator’s results for research purposes triggered the attempt to extract and analyze its data.

“We were using it as an outcome variable for research, and we found that it has very good day-to-day reproducibility of performance, which is crucial for research,” Volek said. “You want a sensitive and reproducible outcome in your test where there’s not a lot of device or equipment variation.”

Because the lab also focuses on human performance in first responders, researchers’ conversations with military and law enforcement communities convinced Buga that data collected by the simulator could be more broadly useful.

“I created a few programs that enabled us to calculate the shooting data and produce objective training measures,” he said. “This equipment is close to what the military and police use every day, so this has potential to be used as a screening tool across the country.”

Users of the simulator operate the infrared-guided M4 rifle by shooting at a large screen onto which different digitally generated visuals are projected—no headset required. The rifle at Ohio State has been retrofitted to produce the same recoil as a police or military weapon.

The study participants included civilians, police and SWAT officers, and ROTC cadets. Each was first familiarized in a single learning session with the simulator and then completed multiple rounds of three different tasks in each of three study performance sessions.

In the first task, participants fired at the same target a total of 50 times to produce measures of shooting precision. The decision-making assessment involved shooting twice within two seconds at designated shapes and colors on a screen displaying multiple shape and color choices. In the reaction-time scenario, participants shot at a series of plates from left to right as rapidly as possible.
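
To see how such shot data can be turned into separate accuracy and precision measures, here is a minimal sketch; the metric definitions (distance of the group's center from the target center, and mean spread around that center) are common conventions, not necessarily among the exact 21 variables the simulator reports.

```python
# Sketch of separating "accuracy" from "precision" for a shot group; these
# metric definitions are common conventions, not the simulator's exact
# variables. Coordinates are in millimeters relative to the target center.
import math

def accuracy_precision(shots):
    n = len(shots)
    cx = sum(x for x, _ in shots) / n  # group center, x
    cy = sum(y for _, y in shots) / n  # group center, y
    accuracy = math.hypot(cx, cy)      # group center vs. target center
    precision = sum(math.hypot(x - cx, y - cy) for x, y in shots) / n  # spread
    return accuracy, precision

group = [(3.0, -2.0), (4.5, -1.0), (2.5, -3.5), (5.0, -2.5)]  # example shots
acc, prec = accuracy_precision(group)
print(f"accuracy: {acc:.1f} mm off center; precision: {prec:.1f} mm spread")
```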

Internal consistency ratings showed the simulator generated good to excellent test-retest agreement on the 21 variables measured.
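
Test-retest agreement can be illustrated with a much simpler stand-in: correlating each participant's score across two sessions. Reliability studies like this one typically report intraclass correlation coefficients rather than a plain Pearson r, and the scores below are invented, so treat this only as a sketch of the idea.

```python
# Simplified stand-in for test-retest agreement: correlate each person's
# session-1 and session-2 scores. Reliability studies like this typically
# report intraclass correlation coefficients instead; scores are invented.
from statistics import correlation  # Python 3.10+

session_1 = [88.0, 92.5, 75.0, 81.0, 95.5, 69.0]
session_2 = [86.5, 93.0, 77.5, 80.0, 94.0, 71.5]

r = correlation(session_1, session_2)
print(f"test-retest r = {r:.3f}")  # values near 1.0 mean stable performance
```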

All participants were well-rested and completed the study sessions at about the same time of day. Self-evaluations showed that participants’ overall confidence in their shooting performance increased from their first to final sessions. They also rated the simulator as a realistic, low-stress shooting assessment tool.

The low stress and well-rested conditions were important to establishing baseline performance measures, the researchers noted, which then would enable evaluating how injuries and other physical demands of first-responder professions affect shooting performance.

“This simulator could be used to assess the effectiveness of specific training programs designed to improve shooting performance, or to evaluate marksmanship in response to various stressors encountered by the same law enforcement and military personnel,” Buga said. “These novel lines of evidence have enabled us to push the boundaries of tactical research and set the groundwork for using virtual reality in sophisticated training scenarios that support national defense goals.”

Additional co-authors, all from Ohio State, included Drew Decker, Bradley Robinson, Christopher Crabtree, Justen Stoner, Lucas Arce, Xavier El-Shazly, Madison Kackley, Teryn Sapper, John Paul Anders and William Kraemer.

More information:
Alex Buga et al, The VirTra V-100 Is a Test-Retest Reliable Shooting Simulator for Measuring Accuracy/Precision, Decision-Making, and Reaction Time in Civilians, Police/SWAT, and Military Personnel, Journal of Strength & Conditioning Research (2024). DOI: 10.1519/JSC.0000000000004875

Citation:
Virtual reality as a reliable shooting performance-tracking tool (2024, June 11)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-virtual-reality-reliable-tracking-tool.html
