
Wireless receiver blocks interference for better mobile device performance

Credit: Pixabay/CC0 Public Domain

The growing prevalence of high-speed wireless communication devices, from 5G mobile phones to sensors for autonomous vehicles, is leading to increasingly crowded airwaves. This makes the ability to block interfering signals that can hamper device performance an even more important—and more challenging—problem.

With these and other emerging applications in mind, MIT researchers demonstrated a new millimeter-wave multiple-input-multiple-output (MIMO) wireless receiver architecture that can handle stronger spatial interference than previous designs. MIMO systems have multiple antennas, enabling them to transmit and receive signals from different directions. Their wireless receiver senses and blocks spatial interference at the earliest opportunity, before unwanted signals have been amplified, which improves performance.

Key to this MIMO receiver architecture is a special circuit that can target and cancel out unwanted signals, known as a nonreciprocal phase shifter. By making a novel phase shifter structure that is reconfigurable, low-power, and compact, the researchers show how it can be used to cancel out interference earlier in the receiver chain.

Their receiver can block up to four times more interference than some similar devices. In addition, the interference-blocking components can be switched on and off as needed to conserve energy.

In a mobile phone, such a receiver could help mitigate signal quality issues that can lead to slow and choppy Zoom calling or video streaming.

“There is already a lot of utilization happening in the frequency ranges we are trying to use for new 5G and 6G systems. So, anything new we are trying to add should already have these interference-mitigation systems installed. Here, we’ve shown that using a nonreciprocal phase shifter in this new architecture gives us better performance.

“This is quite significant, especially since we are using the same integrated platform as everyone else,” says Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Microsystems Technology Laboratories and Research Laboratory of Electronics (RLE), and the senior author of a paper on this receiver.

Reiskarimian wrote the paper with EECS graduate students Shahabeddin Mohin, who is the lead author, Soroush Araei, and Mohammad Barzgari, an RLE postdoc. The work was recently presented at the IEEE Radio Frequency Circuits Symposium and received the Best Student Paper Award.

Blocking interference

Digital MIMO systems have an analog and a digital portion. The analog portion uses antennas to receive signals, which are amplified, down-converted, and passed through an analog-to-digital converter before being processed in the digital domain of the device. In this case, digital beamforming is required to retrieve the desired signal.

But if a strong, interfering signal coming from a different direction hits the receiver at the same time as a desired signal, it can saturate the amplifier so the desired signal is drowned out. Digital MIMOs can filter out unwanted signals, but this filtering occurs later in the receiver chain. If the interference is amplified along with the desired signal, it is more difficult to filter out later.
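The directional filtering that digital beamforming performs can be sketched with a toy linear array. This is a hedged illustration with invented parameters (array size, arrival angles), not the MIT receiver's actual geometry: weights are chosen to keep a desired arrival angle while placing a spatial null on the interferer.

```python
import numpy as np

def steering_vector(theta_deg, n_elements=4):
    """Array response of a half-wavelength-spaced linear array."""
    n = np.arange(n_elements)
    return np.exp(-1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

def null_steering_weights(theta_desired, theta_interferer, n_elements=4):
    """Weights that preserve the desired direction and null the interferer."""
    a_d = steering_vector(theta_desired, n_elements)
    a_i = steering_vector(theta_interferer, n_elements)
    # Project the desired steering vector onto the subspace orthogonal
    # to the interferer's steering vector, forcing a spatial null there.
    proj = np.eye(n_elements) - np.outer(a_i, a_i.conj()) / (a_i.conj() @ a_i)
    return proj @ a_d

w = null_steering_weights(theta_desired=10, theta_interferer=-40)
gain_desired = abs(w.conj() @ steering_vector(10))
gain_interferer = abs(w.conj() @ steering_vector(-40))
# The desired direction keeps useful gain while the interferer is nulled.
```

In a digital MIMO receiver this step runs after amplification and digitization, which is exactly why a strong blocker can saturate the amplifier before these weights ever get a chance to act.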

“The output of the initial low-noise amplifier is the first place you can do this filtering with minimal penalty, so that is exactly what we are doing with our approach,” Reiskarimian says.

The researchers built and installed four nonreciprocal phase shifters immediately at the output of the first amplifier in each receiver chain, all connected to the same node. These phase shifters can pass signal in both directions and sense the angle of an incoming interfering signal. The devices can adjust their phase until they cancel out the interference.

The phase of these devices can be precisely tuned, so they can sense and cancel an unwanted signal before it passes to the rest of the receiver, blocking interference before it affects any other parts of the receiver. In addition, the phase shifters can follow signals to continue blocking interference if it changes location.
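The cancellation principle itself reduces to tuning a phase until the shifted copy of the interferer sums destructively with the original. Below is a minimal baseband stand-in; the tone frequency and sweep resolution are invented for the example, and the real circuit operates on analog millimeter-wave signals rather than sampled arrays.

```python
import numpy as np

t = np.linspace(0, 1e-6, 1000)            # 1-microsecond observation window
interferer = np.cos(2 * np.pi * 5e6 * t)  # toy 5 MHz stand-in for a blocker

# Sweep the phase-shifter setting and keep the one whose output,
# summed with the interferer, leaves the smallest residual power.
phases = np.linspace(0, 2 * np.pi, 360, endpoint=False)
residuals = [np.sum((interferer + np.cos(2 * np.pi * 5e6 * t + p)) ** 2)
             for p in phases]
best = phases[int(np.argmin(residuals))]  # a 180-degree shift cancels the tone
```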

“If you start getting disconnected or your signal quality goes down, you can turn this on and mitigate that interference on the fly. Because ours is a parallel approach, you can turn it on and off with minimal effect on the performance of the receiver itself,” Reiskarimian adds.

A compact device

In addition to making their novel phase shifter architecture tunable, the researchers designed the devices to use less space on the chip and consume less power than typical nonreciprocal phase shifters.

Once the researchers had done the analysis to show their idea would work, their biggest challenge was translating the theory into a circuit that achieved their performance goals. At the same time, the receiver had to meet strict size restrictions and a tight power budget, or it wouldn’t be useful in real-world devices.

In the end, the team demonstrated a compact MIMO architecture on a 3.2-square-millimeter chip that could block signals which were up to four times stronger than what other devices could handle. Simpler than typical designs, their phase shifter architecture is also more energy efficient.

Moving forward, the researchers want to scale up their device to larger systems, as well as enable it to perform in the new frequency ranges utilized by 6G wireless devices. These frequency ranges are prone to powerful interference from satellites. In addition, they would like to adapt nonreciprocal phase shifters to other applications.

More information:
Mohin et al. A Blocker-Tolerant mm-Wave MIMO Receiver with Spatial Notch Filtering Using Non-Reciprocal Phase-Shifters for 5G Applications. (2024). radiuslab.mit.edu/RMo1A-4-PDF.pdf

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Wireless receiver blocks interference for better mobile device performance (2024, June 27)
retrieved 27 June 2024
from https://techxplore.com/news/2024-06-wireless-blocks-mobile-device.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.


Magnesium-18’s unique decay process: From theory to practice

In contrast to classical physics, multi-particle decay is a phenomenon unique to the quantum world. Magnesium-18 exemplifies such an exotic system, positioned far beyond the proton drip line. Spitting out proton pairs one after another, unbound magnesium-18 decays to neon-16 and then to oxygen-14, a chain of events that physicists liken to nesting dolls. The complete study is accessible via DOI: 10.1007/s41365-024-01479-1. Credit: Simin Wang

Led by physicist Si-Min Wang, the research team at the Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, and Shanghai Research Center for Theoretical Nuclear Physics, NSFC, Fudan University, has documented that 18Mg undergoes a unique “multi-proton” decay mechanism in which it sequentially emits two pairs of protons. This process, differing from traditional radioactive decay, underscores a complex interplay of nuclear forces and diverges from long-held nuclear models.

The phenomenon of two-proton decay, predicted in the 1960s, has transitioned from a theoretical curiosity to a measurable reality thanks to recent advances in technology. The study of this unusual behavior in 18Mg, a nuclide far from the typical line of nuclear stability, offers crucial insights into the forces at play within highly unstable nuclei loaded with protons.

Utilizing the Gamow-coupled-channel method, researchers have captured and analyzed the interactions and decay within 18Mg. This approach provides a detailed view of the nuclear structural dynamics, allowing for a more nuanced understanding of atomic behavior during decay. “Our method has significantly improved how we interpret the interaction between protons and the nucleus during the decay in extreme conditions,” explains Professor Wang.
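The sequential character of the decay chain (18Mg → 16Ne + 2p → 14O + 2p) can be illustrated with a toy Monte Carlo. The mean lifetimes below are invented for the sketch and are not the widths computed in the paper; the point is only that a two-step chain's completion time is the sum of two exponential waiting times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean lifetimes (arbitrary units) for the two 2p-emission steps.
TAU_18MG, TAU_16NE = 1.0, 2.5

def sample_chain_completion_times(n):
    """Time for each nucleus to finish both sequential decay steps."""
    t1 = rng.exponential(TAU_18MG, n)   # 18Mg -> 16Ne + 2p
    t2 = rng.exponential(TAU_16NE, n)   # 16Ne -> 14O + 2p
    return t1 + t2

times = sample_chain_completion_times(100_000)
# For independent exponential steps, the mean completion time
# is simply the sum of the two mean lifetimes.
mean_time = times.mean()
```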

The study not only expands academic understanding but also has practical ramifications in various fields. Knowledge about the decay behavior of exotic nuclei like 18Mg could influence advancements in fundamental interactions, energy sectors, and all kinds of open quantum systems.

  • The calculated spectra for 18Mg and 18C suggest that the ground state of 18Mg might exhibit a “democratic” decay mode, implying a potential competition between single-proton (1p) and two-proton (2p) decay pathways. Credit: Simin Wang
  • Using an advanced theoretical model, the decay process of 18Mg has been elucidated: simultaneous two-proton emission is the more probable decay mode, despite the energetic feasibility of single-proton decay. Credit: Simin Wang

The team is now set to explore further how the deformation of the nucleus influences decay processes. Future research aims to delve into the relationships between nucleon structures and decay mechanisms, with the potential to refine theoretical models in nuclear physics extensively.

“By converting what was once theoretical into something we can now study and quantify, this research enhances both our fundamental understanding of nuclear physics and our ability to apply this knowledge in practical ways,” stated Professor Wang. “Each discovery provides not only new academic insights but also practical solutions that may benefit various technological fields in the future.”

The work is published in the journal Nuclear Science and Techniques.

More information:
Long Zhou et al, Structure and 2p decay mechanism of 18Mg, Nuclear Science and Techniques (2024). DOI: 10.1007/s41365-024-01479-1

Provided by
Nuclear Science and Techniques

Citation:
Magnesium-18’s unique decay process: From theory to practice (2024, June 27)
retrieved 27 June 2024
from https://phys.org/news/2024-06-magnesium-unique-decay-theory.html


Using AI to save the Tasmanian devil

Credit: Tasmanian Land Conservancy

Scientists at the University of Tasmania are using groundbreaking artificial intelligence (AI) technology to tackle the spread of Devil Facial Tumor 2 (DFT2).

This innovative project, led by Dr. Rodrigo Hamede and Professor Barry Brook at the School of Natural Sciences, aims to transform how scientists monitor and manage wildlife diseases. With potential applications extending beyond Tasmanian devils, it could revolutionize wildlife disease management globally.

DFT2, the second transmissible cancer affecting Tasmanian devils, was discovered in 2014 near Cygnet on the D’Entrecasteaux Peninsula. The disease has steadily spread across southeastern Tasmania. In November 2022, the disease was detected for the first time outside of the peninsula, raising concerns about its accelerating spread.

To combat this, the project combines data from remote cameras and AI software to process thousands of images and identify diseased individuals in real time.

“This technology is a game-changer,” Dr. Hamede said. “Our AI software can rapidly process images of Tasmanian devils captured by the cameras through a three-step process.

“The AI first separates animal images from blanks, then determines the species, and finally distinguishes between healthy devils and those with tumors. Using advanced computer-vision techniques, we can monitor the disease’s progression much faster than human labeling, without compromising accuracy.”
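The three-step cascade described in the quote can be sketched as a simple pipeline. The stub classifiers below are placeholders, since the team's actual models are not published in this article; only the staging logic is illustrated.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Detection:
    image_id: str
    species: Optional[str] = None
    has_tumor: Optional[bool] = None

def triage(image_id: str,
           is_animal: Callable[[str], bool],
           classify_species: Callable[[str], str],
           detect_tumor: Callable[[str], bool]) -> Optional[Detection]:
    """Three-step cascade: drop blanks, identify species, flag disease."""
    if not is_animal(image_id):           # step 1: discard blank frames
        return None
    species = classify_species(image_id)  # step 2: which species is this?
    if species != "tasmanian_devil":
        return Detection(image_id, species)
    tumor = detect_tumor(image_id)        # step 3: healthy vs. tumorous devil
    return Detection(image_id, species, tumor)

# Stub models standing in for the real classifiers.
result = triage("cam03_0412.jpg",
                is_animal=lambda _: True,
                classify_species=lambda _: "tasmanian_devil",
                detect_tumor=lambda _: False)
```

Running the cheap blank-frame filter first is what makes the cascade efficient: the vast majority of camera-trap images never reach the more expensive disease model.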

The insights from this project will help inform timely interventions and could serve as a model for tackling other wildlife diseases.

“Our approach merges traditional methods with advanced sensor-based monitoring and AI technology,” said Professor Brook.

“This project could significantly change how we manage wildlife diseases, both in Tasmania and around the world. The use of AI allows for more responsive detections and interventions by eliminating the time lag caused when experts need to manually process all the images.”

A key part of the project is involving local landowners and community members. By working together with local councils, government and non-government organizations, and existing schemes like the Land for Wildlife Scheme and Tasmanian Land Conservancy, the project aims to create a community-based monitoring network.

“Community support is vital. By working together, we can make a real difference in managing wild devil populations affected by the disease,” emphasized Dr. Hamede.

“We are calling for landowners from the Huon Valley and Derwent Valley to sign up for our project so we can deploy cameras on their properties.”

“The more people sign up for our project, the better we can monitor DFT2 spread and effects. Their participation provides valuable data, raises awareness, and fosters a collective effort to combat DFT2.”

This new methodology is set to become the standard approach to monitor devil populations and DFTD infection dynamics across Tasmania. It will improve our capacity to assess and deliver relevant conservation strategies in a cost-effective and time-efficient manner.

Citation:
Using AI to save the Tasmanian devil (2024, June 27)
retrieved 27 June 2024
from https://phys.org/news/2024-06-ai-tasmanian-devil.html


AI companies train language models on YouTube’s archive—making family-and-friends videos a privacy risk

Credit: The Conversation

The promised artificial intelligence revolution requires data. Lots and lots of data. OpenAI and Google have begun using YouTube videos to train their text-based AI models. But what does the YouTube archive actually include?

Our team of digital media researchers at the University of Massachusetts Amherst collected and analyzed random samples of YouTube videos to learn more about that archive. We published an 85-page paper about that dataset and set up a website called TubeStats for researchers and journalists who need basic information about YouTube.

Now, we’re taking a closer look at some of our more surprising findings to better understand how these obscure videos might become part of powerful AI systems. We’ve found that many YouTube videos are meant for personal use or for small groups of people, and a significant proportion were created by children who appear to be under 13.

Bulk of the YouTube iceberg

Most people’s experience of YouTube is algorithmically curated: Up to 70% of the videos users watch are recommended by the site’s algorithms. Recommended videos are typically popular content such as influencer stunts, news clips, explainer videos, travel vlogs and video game reviews, while content that is not recommended languishes in obscurity.

Some YouTube content emulates popular creators or fits into established genres, but much of it is personal: family celebrations, selfies set to music, homework assignments, video game clips without context and kids dancing. The obscure side of YouTube—the vast majority of the estimated 14.8 billion videos created and uploaded to the platform—is poorly understood.

Illuminating this aspect of YouTube—and social media generally—is difficult because big tech companies have become increasingly hostile to researchers.

We’ve found that many videos on YouTube were never meant to be shared widely. We documented thousands of short, personal videos that have few views but high engagement—likes and comments—implying a small but highly engaged audience. These were clearly meant for a small audience of friends and family. Such social uses of YouTube contrast with videos that try to maximize their audience, suggesting another way to use YouTube: as a video-centered social network for small groups.

Other videos seem intended for a different kind of small, fixed audience: recorded classes from pandemic-era virtual instruction, school board meetings and work meetings. While not what most people think of as social uses, they likewise imply that their creators have a different expectation about the audience for the videos than creators of the kind of content people see in their recommendations.

Fuel for the AI machine

It was with this broader understanding that we read The New York Times exposé on how OpenAI and Google turned to YouTube in a race to find new troves of data to train their large language models. An archive of YouTube transcripts makes an extraordinary dataset for text-based models.

There is also speculation, fueled in part by an evasive answer from OpenAI’s chief technology officer Mira Murati, that the videos themselves could be used to train AI text-to-video models such as OpenAI’s Sora.

The New York Times story raised concerns about YouTube’s terms of service and, of course, the copyright issues that pervade much of the debate about AI. But there’s another problem: How could anyone know what an archive of more than 14 billion videos, uploaded by people all over the world, actually contains? It’s not entirely clear that Google knows or even could know if it wanted to.

Kids as content creators

We were surprised to find an unsettling number of videos featuring kids or apparently created by them. YouTube requires uploaders to be at least 13 years old, but we frequently saw children who appeared to be much younger than that, typically dancing, singing or playing video games.

In our preliminary research, our coders determined nearly a fifth of random videos with at least one person’s face visible likely included someone under 13. We didn’t take into account videos that were clearly shot with the consent of a parent or guardian.

Our current sample size of 250 is relatively small—we are working on coding a much larger sample—but the findings thus far are consistent with what we’ve seen in the past. We’re not aiming to scold Google. Age validation on the internet is infamously difficult and fraught, and we have no way of determining whether these videos were uploaded with the consent of a parent or guardian. But we want to underscore what is being ingested by these large companies’ AI models.
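With a sample of 250, the uncertainty around "nearly a fifth" can be quantified with a standard binomial confidence interval. The count of 50 below is an assumption standing in for "nearly a fifth," not a figure reported by the researchers.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Roughly a fifth of 250 coded videos: about 50 apparent under-13 subjects.
lo, hi = wilson_interval(50, 250)
# The plausible range runs from roughly 15% to 25%, so even the low
# end of the interval still represents a substantial share of videos.
```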

Small reach, big influence

It’s tempting to assume OpenAI is using highly produced influencer videos or TV newscasts posted to the platform to train its models, but previous research on large language model training data shows that the most popular content is not always the most influential in training AI models. A virtually unwatched conversation between three friends could have much more linguistic value in training a chatbot language model than a music video with millions of views.

Unfortunately, OpenAI and other AI companies are quite opaque about their training materials: They don’t specify what goes in and what doesn’t. Most of the time, researchers can infer problems with training data through biases in AI systems’ output. But when we do get a glimpse at training data, there’s often cause for concern. For example, Human Rights Watch released a report on June 10, 2024, that showed that a popular training dataset includes many photos of identifiable kids.

The history of big tech self-regulation is filled with moving goal posts. OpenAI in particular is notorious for asking for forgiveness rather than permission and has faced increasing criticism for putting profit over safety.

Concerns over the use of user-generated content for training AI models typically center on intellectual property, but there are also privacy issues. YouTube is a vast, unwieldy archive, impossible to fully review.

Models trained on a subset of professionally produced videos could conceivably be an AI company’s first training corpus. But without strong policies in place, any company that ingests more than the popular tip of the iceberg is likely including content that violates the Federal Trade Commission’s Children’s Online Privacy Protection Rule, which prohibits companies from collecting data from children under 13 without parental notice and consent.

With last year’s executive order on AI and at least one promising proposal on the table for comprehensive privacy legislation, there are signs that legal protections for user data in the U.S. might become more robust.

When the Wall Street Journal’s Joanna Stern asked OpenAI CTO Mira Murati whether OpenAI trained its text-to-video generator Sora on YouTube videos, she said she wasn’t sure.

Have you unwittingly helped train ChatGPT?

The intentions of a YouTube uploader simply aren’t as consistent or predictable as those of someone publishing a book, writing an article for a magazine or displaying a painting in a gallery. But even if YouTube’s algorithm ignores your upload and it never gets more than a couple of views, it may be used to train models like ChatGPT and Gemini.

As far as AI is concerned, your family reunion video may be just as important as those uploaded by influencer giant Mr. Beast or CNN.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
AI companies train language models on YouTube’s archive—making family-and-friends videos a privacy risk (2024, June 27)
retrieved 27 June 2024
from https://techxplore.com/news/2024-06-ai-companies-language-youtube-archive.html


An open-source generalist model for robot object manipulation

These are the robots we tested Octo on – you can see that there is a wide range of different robot arms, from small to large, single arm to bimanual. Octo was able to control all these robots. Credit: Team et al.

The public release of ChatGPT and other large language models (LLMs) has allowed developers worldwide to start experimenting with these models to enhance the interactive capabilities of their own systems. Similar generalizable models for robotic manipulation, however, remain scarce.

Researchers at University of California, Berkeley (UC Berkeley), Stanford University and CMU recently introduced Octo, an open-source generalist model for robotic manipulation that could allow different robotic systems to effectively manipulate a wide range of objects. This model, presented in a paper pre-published on the server arXiv, could open new avenues for the development of robots that can tackle manual tasks.

“Much of the current progress in AI is driven by large datasets and large models,” Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black and Oier Mees told Tech Xplore. “In the robotics community, we recently assembled the Open X-Embodiment dataset, a big manipulation dataset that pools data from many research institutions. While this new dataset is a really exciting resource, at the time there weren’t many models that could make use of it yet.”

The recent work by this research team had two main objectives: first, to develop a good generalist robotics model that could be applied to various robots; and second, to release open-source code that would allow other researchers to build similar models in the future.

“Octo is what we call a ‘generalist’ robot model, a neural network that can control many different types of robots and make them fulfill requests like ‘pick up the spoon,’ ‘close the drawer,’ ‘wipe the table’ etc.,” Ghosh, Walke, Pertsch, Black and Mees explained.

“Being a generalist and working on many robots is key, because if you look at research labs around the world, many of them use different robots, so the only way to ensure Octo can be used by many researchers is by supporting a wide range of robots.”

Within the technology research and development community, highly performing computational tools that can be applied across multiple systems are often referred to as foundational models. An example of these models is ChatGPT, which can be used to equip various agents and systems with natural language processing (NLP) capabilities.

“We want to build similar foundation models, but for robot control, or in other words, models that can control many robots and make them solve many different tasks,” Ghosh, Walke, Pertsch, Black and Mees said.

“Octo is a first step towards that goal. Its training looks very similar to models like ChatGPT: we curate a large and diverse dataset, in our case robot data instead of text, and train a large model to predict the next action the robot should execute given the current robot state and a task instruction.”
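The training objective the team describes, predicting the next action from the current state, can be sketched at toy scale. This is a linear stand-in fit with plain gradient descent, not Octo's actual transformer architecture, and every dimension and dataset below is invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM, ACTION_DIM, N = 8, 3, 2000
true_policy = rng.normal(size=(STATE_DIM, ACTION_DIM))
states = rng.normal(size=(N, STATE_DIM))               # synthetic robot states
actions = states @ true_policy + 0.01 * rng.normal(size=(N, ACTION_DIM))

# Fit a linear policy by minimizing mean squared error between
# predicted and demonstrated actions (behavior cloning).
weights = np.zeros((STATE_DIM, ACTION_DIM))
for _ in range(500):
    pred = states @ weights
    grad = states.T @ (pred - actions) / N             # MSE gradient
    weights -= 0.1 * grad

mse = float(np.mean((states @ weights - actions) ** 2))  # near the noise floor
```

The structure carries over directly: a large model replaces the linear map, and (observation, instruction, action) tuples from the Open X-Embodiment dataset replace the synthetic pairs.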

Octo, the model developed by Ghosh, Walke, Pertsch, Black and Mees, is based on the same type of neural network as ChatGPT, known as a transformer. A key advantage of Octo over previously developed robotics models is the scale of its training data and its flexibility.

The model was trained on the largest dataset of robotic manipulation trajectories compiled to date: the Open X-Embodiment dataset. Octo can also process a diverse range of sensory inputs, including different types of images, robot joint readings, language instructions, goal-related images and more.

“Octo can also control many different types of robot arms, from small single arms that can barely pick up a soda can, to larger, more powerful robot arms and even bi-manual setups,” Ghosh, Walke, Pertsch, Black and Mees said. “This flexibility is what makes Octo more applicable to the diverse setups roboticists actually have around the world.”

The researchers evaluated their model in a series of initial experiments, deploying it on nine different robotic systems developed at UC Berkeley, Stanford and CMU. Octo succeeded in controlling these robots and allowed them to complete various manipulation tasks, even when it had not encountered those robots’ sensor data or physical configurations during training.

“It was really cool to see that we can take our Octo model and use it to control many different robots,” the researchers said. “Since we released the model, we saw quite a few people who tried running it on their own robots and we have been using the codebase we built for Octo in our next projects as well. These are some encouraging signs that Octo will indeed help foster the next generation of improved foundation models for robotics.”

For the researchers, the development of Octo was merely a small milestone towards their goal of building a generalist model for robotic manipulation. In their next studies, they plan to continue working towards this goal and hope that research groups at other institutes will also start experimenting with their code.

Part of the Octo model team running robot experiments late at night before the model release (left to right: Oier Mees, Dibya Ghosh, Homer Walke, Karl Pertsch, Lawrence Chen). Octo was a big team effort between multiple research labs from Berkeley, Stanford and CMU. Work on foundation models in robotics is hard, with many, many hours spent evaluating models on all different types of robots, so having many helping hands is a necessity. Credit: Team et al.

“Right now, chances are that the model will not work on your robot out of the box and you need to collect a few examples of the task you want your robot to solve to teach it to Octo, even if it’s a mundane task like picking up a coke can in a new kitchen,” they added.

“This is to say, the generalization ability of the current model is still pretty limited and we’re working on new models that will push this a bit further. We’re not yet at the point where you can just download a model to your robot, tell your robot what you’d like it to do and it will succeed 9 out of 10 times, but we’re working towards this goal.”

More information:
Dibya Ghosh et al, Octo: An Open-Source Generalist Robot Policy, arXiv (2024). DOI: 10.48550/arxiv.2405.12213

Journal information:
arXiv


© 2024 Science X Network

Citation:
An open-source generalist model for robot object manipulation (2024, June 10)
retrieved 27 June 2024
from https://techxplore.com/news/2024-06-source-generalist-robot.html
