
California’s 2024 Dungeness crab season postponed to protect whales



Dungeness Crab
Credit: Unsplash/CC0 Public Domain

For the sixth consecutive year, California officials are delaying the Bay Area’s commercial Dungeness crab season because of the “high abundance” of migrating humpback whales already getting ensnared in old crab-pot fishing lines and other gear. Conservation groups, meanwhile, are again expressing frustration with what they say is a too-slow transition to safer fishing methods.

The upcoming season normally would start Nov. 15 in the waters from the Sonoma/Mendocino County line south to the Mexican border.

The decision came down on Friday afternoon from Charlton H. Bonham, director of the California Department of Fish and Wildlife, and was made in consultation with representatives of the fishing industry, environmental organizations and scientists.

According to the nonprofit conservation group Oceana, a participant in those discussions, during the period from May to Oct. 21 of this year, four humpback whales were confirmed entangled in California commercial Dungeness crab fishing gear. Ten others were observed entangled in “unknown fishing gear that may be Dungeness crab gear.”

Other entanglements were sighted in the past few days, said marine scientist Caitlynn Birch, Oceana’s campaign manager.

Both she and representatives from the Center for Biological Diversity spoke out in favor of the use of rope-less or pop-up gear to give whales a safe migration south.

“I’m glad state officials are taking precautions to avoid entangling whales in the area, but with viable pop-up gear available, these season delays are getting increasingly archaic,” said Ben Grundy, oceans campaigner at the Center for Biological Diversity, in a statement.

He noted that if the state had authorized the use of pop-up gear—which he said has performed well in tests—”crab fishers could be prepping to put their traps in the water right now.”

Although another assessment will take place on or around Nov. 22, that date will make it impossible for crab to make an appearance on Thanksgiving dinner tables on Nov. 28. That assessment could, however, lead to an opening date in early December.

The recreational Dungeness season will be allowed to start as scheduled on Nov. 2—but with restrictions. Crabbers may not use trap gear, according to the state order; only hoop nets and crab snares will be allowed until further assessment.

Since 2015, there have been delays in all but one commercial Dungeness season in the Bay Area. Domoic acid, a toxin that can sicken anyone who eats tainted crab, destroyed Northern California’s 2015–2016 commercial season and caused delays in other years.

In 2018, the commercial season began without a hitch, although recreational crabbers had to postpone their fishing.

In 2019 and 2020, the fishing line danger to whales resulted in a crabbing delay of several weeks. The 2020 crabbing season was officially set to begin Dec. 23, but price negotiations between crab fleets and seafood processors delayed the start until early January 2021.

With delays to protect whales, the truncated 2021–22 season ran from Dec. 29 to April 8, and the 2022–23 season from Dec. 31 to April 15. The 2023–24 season didn’t begin until Jan. 18, 2024, and closed early, on April 8.

2024 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.

Citation:
California’s 2024 Dungeness crab season postponed to protect whales (2024, October 28)
retrieved 28 October 2024
from https://phys.org/news/2024-10-california-dungeness-crab-season-postponed.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Google Flights introduces new feature to find cheaper airfares



transatlantic flight
Credit: Unsplash/CC0 Public Domain

Just in time for the holiday travel frenzy, Google Flights is introducing a new feature designed to help users obtain the lowest airline ticket prices available.

The search engine has created a new tab in Google Flights search results named “Cheapest,” which will feature a variety of lower-priced flight options.

The fares displayed on the “Cheapest” tab may come from third-party booking sites offering lower prices, for instance, or they may be options that involve flying back to a different airport than you departed from (but still within the same city) in order to score a better deal.

Some of the other cheaper flight options may involve a longer layover, self-transfers or purchasing different legs of the trip through multiple airlines or booking sites.

“When you search with Google Flights, the best options appear at the top of the results, based on a mix of price and convenience,” says a Google blog post about the new feature. “But sometimes, there might be cheaper options available for those of you who are willing to give up some convenience for the best deal.”

Google describes the new search results as an option for “those times when cost matters more than convenience.”

“The new tab gives you an easy way to see the lowest prices available and then decide for yourself what trade-offs you want to make,” Google adds.

Whether you’re scoping out next year’s big vacation or planning a trip for the holidays, this upgrade should help you make the most of your travel budget, says Google.

Beginning this week, after entering trip details, travelers can tap on “Cheapest” to show the additional flight options with lower prices. The update will be available globally over the next two weeks.

2024 Northstar Travel Media, LLC. Distributed by Tribune Content Agency, LLC.

Citation:
Google Flights introduces new feature to find cheaper airfares (2024, October 28)
retrieved 28 October 2024
from https://techxplore.com/news/2024-10-google-flights-feature-cheaper-airfares.html


Stone Age network reveals ancient Paris was an artisanal trading hub


Blades and other artefacts were traded across France during the Stone Age

Jacques Descloitres/MODIS Land Rapid Response Team

Around 7000 years ago, long knives, bracelets and other stone goods fashioned by skilled Parisian crafters were reaching people hundreds of kilometres away, via complex trade networks that are now being mapped for the first time.

By combining archaeology with computer modelling, Solène Denis at the French National Centre for Scientific Research in Nanterre and Michael Kempf at the University of Basel in Switzerland have reconstructed the lengthy and winding paths taken to supply people from what is now Normandy…




Do they work better together or alone?



Humans and AI: Do they work better together or alone?
Credit: MIT Sloan School of Management

The potential of human-AI collaboration has captured our imagination: a future where human creativity and AI’s analytical power combine to make critical decisions and solve complex problems. But new research from the MIT Center for Collective Intelligence (CCI) suggests this vision may be much more nuanced than we once thought.

Published in Nature Human Behaviour, “When Combinations of Humans and AI Are Useful” is the first large-scale meta-analysis conducted to better understand when human-AI combinations are useful in task completion, and when they are not. Surprisingly, the research found that human-AI combinations often fell short on decision-making tasks, but showed much more promise on creative tasks.

The research, conducted by MIT doctoral student and CCI affiliate Michelle Vaccaro, and MIT Sloan School of Management professors Abdullah Almaatouq and Thomas Malone, arrives at a time marked by both excitement and uncertainty about AI’s impact on the workforce.

Instead of focusing on job displacement predictions, Malone said that he and the team wanted to explore questions they believe deserve more attention: When do humans and AI work together most effectively? And how can organizations create guidelines and guardrails to ensure these partnerships succeed?

The researchers conducted a meta-analysis of 370 results on AI and human combinations in a variety of tasks from 106 different experiments published in relevant academic journals and conference proceedings between January 2020 and June 2023.

All the studies compared three different ways of performing tasks: a) human-only systems b) AI-only systems, and c) human-AI collaborations. The overall goal of the meta-analysis was to understand the underlying trends revealed by the combination of the studies.

Test outcomes

The researchers found that on average, human-AI teams performed better than humans working alone, but didn’t surpass the capabilities of AI systems operating on their own.

Importantly, they did not find “human-AI synergy”: on average, human-AI systems performed worse than the best of humans alone or AI alone on the performance metrics studied. This suggests that using either humans alone or AI systems alone would often have been more effective than the human-AI collaborations studied.
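The synergy criterion the researchers applied can be sketched in a few lines. The scores below are hypothetical numbers chosen only to illustrate the pattern reported (teams beat the human baseline but not the AI baseline); they are not figures from the study:

```python
def human_ai_synergy(human: float, ai: float, combined: float) -> bool:
    """Synergy requires the combined system to beat the BETTER of the
    two individual baselines, not just one of them."""
    return combined > max(human, ai)

# Hypothetical performance scores (e.g., accuracy on some task):
human_score, ai_score, team_score = 0.78, 0.90, 0.85

# The team outperforms humans alone...
assert team_score > human_score
# ...but falls short of AI alone, so there is no human-AI synergy.
assert not human_ai_synergy(human_score, ai_score, team_score)
```

The distinction matters in practice: a human-AI team that merely beats the human baseline can still be a net loss if the AI alone would have done better.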

“There’s a prevailing assumption that integrating AI into a process will always help performance—but we show that that isn’t true,” said Vaccaro. “In some cases, it’s beneficial to leave some tasks solely for humans, and some tasks solely for AI.”

The team also identified factors affecting how well humans and AI work together. For instance, for decision-making tasks like classifying deep fakes, forecasting demand, and diagnosing medical cases, human-AI teams often underperformed against AI alone.

However, for many creative tasks, such as summarizing social media posts, answering questions in a chat, or generating new content and imagery, these collaborations were often better than the best of humans or AI working independently.

“Even though AI in recent years has mostly been used to support decision-making by analyzing large amounts of data, some of the most promising opportunities for human-AI combinations now are in supporting the creation of new content, such as text, images, music, and video,” said Malone.

The team theorized that this advantage in creative endeavors stems from their dual nature: While these tasks require human talents like creativity, knowledge, and insight, they also involve repetitive work where AI excels. Designing an image, for instance, requires both artistic inspiration—where humans excel—and detailed execution—where AI often shines.

In a similar vein, writing and generating many kinds of text documents requires human knowledge and insight, but also involves routine and automated processes such as filling in boilerplate text.

“There is a lot of potential in combining humans and AI, but we need to think more critically about it,” said Vaccaro. “The effectiveness is not necessarily about the baseline performance of either of them, but about how they work together and complement each other.”

Optimizing collaboration

The research team believes its findings provide guidance and lessons for organizations looking to bring AI into their workplaces more effectively. For starters, Vaccaro emphasized the importance of assessing whether humans and AI are truly outperforming either humans or AI working independently.

“Many organizations may be overestimating the effectiveness of their current systems,” she added. “They need to get a pulse on how well they’re working.”

Next, they need to evaluate where AI can help workers. The study indicates that AI can be particularly helpful in creative tasks, so organizations should explore what kinds of creative work could be ripe for the insertion of AI.

Finally, organizations need to set clear guidelines and establish robust guardrails for AI usage. They might, for example, devise processes that leverage complementary strengths.

“Let AI handle the background research, pattern recognition, predictions, and data analysis, while harnessing human skills to spot nuances and apply contextual understanding,” Malone suggested. In other words, “Let humans do what they do best.”

Malone concluded, “As we continue to explore the potential of these collaborations, it’s clear that the future lies not just in replacing humans with AI, but also in finding innovative ways for them to work together effectively.”

More information:
When Combinations of Humans and AI Are Useful, Nature Human Behaviour (2024). DOI: 10.1038/s41562-024-02024-1

Citation:
Humans and AI: Do they work better together or alone? (2024, October 28)
retrieved 28 October 2024
from https://techxplore.com/news/2024-10-humans-ai.html


New technique pools diverse data



A faster, better way to train general-purpose robots
Researchers filmed multiple instances of a robotic arm feeding co-author Jialiang Zhao’s adorable dog, Momo. The videos were included in datasets to train the robot. Credit: Massachusetts Institute of Technology

In the classic cartoon “The Jetsons,” Rosie the robotic maid seamlessly switches from vacuuming the house to cooking dinner to taking out the trash. But in real life, training a general-purpose robot remains a major challenge.

Typically, engineers collect data that are specific to a certain robot and task, which they use to train the robot in a controlled environment. However, gathering these data is costly and time-consuming, and the robot will likely struggle to adapt to environments or tasks it hasn’t seen before.

To train better general-purpose robots, MIT researchers developed a versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks.

Their method involves aligning data from varied domains, like simulations and real robots, and multiple modalities, including vision sensors and robotic arm position encoders, into a shared “language” that a generative AI model can process.

The work is published on the arXiv preprint server.

By combining such an enormous amount of data, this approach can be used to train a robot to perform a variety of tasks without the need to start training it from scratch each time.

This method could be faster and less expensive than traditional techniques because it requires far fewer task-specific data. In addition, it outperformed training from scratch by more than 20% in simulation and real-world experiments.

“In robotics, people often claim that we don’t have enough training data. But in my view, another big problem is that the data come from so many different domains, modalities, and robot hardware. Our work shows how you’d be able to train a robot with all of them put together,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of the paper on this technique.

Wang’s co-authors include fellow EECS graduate student Jialiang Zhao; Xinlei Chen, a research scientist at Meta; and senior author Kaiming He, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Neural Information Processing Systems, held 10–15 December at the Vancouver Convention Center.

Inspired by LLMs

A robotic “policy” takes in sensor observations, like camera images or proprioceptive measurements that track the speed and position of a robotic arm, and then tells a robot how and where to move.

Policies are typically trained using imitation learning, meaning a human demonstrates actions or teleoperates a robot to generate data, which are fed into an AI model that learns the policy. Because this method uses a small amount of task-specific data, robots often fail when their environment or task changes.

To develop a better approach, Wang and his collaborators drew inspiration from large language models like GPT-4.

These models are pretrained using an enormous amount of diverse language data and then fine-tuned by feeding them a small amount of task-specific data. Pretraining on so much data helps the models adapt to perform well on a variety of tasks.

“In the language domain, the data are all just sentences. In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture,” he says.

Robotic data take many forms, from camera images to language instructions to depth maps. At the same time, each robot is mechanically unique, with a different number and orientation of arms, grippers, and sensors. Plus, the environments where data are collected vary widely.

The MIT researchers developed a new architecture called Heterogeneous Pretrained Transformers (HPT) that unifies data from these varied modalities and domains.

They put a machine-learning model known as a transformer into the middle of their architecture, which processes vision and proprioception inputs. A transformer is the same type of model that forms the backbone of large language models.

The researchers align data from vision and proprioception into the same type of input, called a token, which the transformer can process. Each input is represented with the same fixed number of tokens.

Then the transformer maps all inputs into one shared space, growing into a huge, pretrained model as it processes and learns from more data. The larger the transformer becomes, the better it will perform.
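The token-alignment idea described above can be illustrated with a toy sketch: each modality is projected into the same fixed number of tokens of the same width, so a single transformer can consume them side by side. The shapes, names, and random projections below are illustrative assumptions, not the actual HPT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
TOKENS_PER_MODALITY = 16   # same fixed token count for every modality
D_MODEL = 64               # shared embedding width

def to_tokens(features: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Linearly project a flat feature vector into a fixed-size token grid."""
    return (features @ proj).reshape(TOKENS_PER_MODALITY, D_MODEL)

# Raw inputs from two very different modalities (sizes are made up):
vision = rng.normal(size=49 * 32)   # e.g., flattened image-patch features
proprio = rng.normal(size=7)        # e.g., 7 joint-angle readings

# Each modality gets its own projection into the shared token space.
vision_proj = rng.normal(size=(vision.size, TOKENS_PER_MODALITY * D_MODEL))
proprio_proj = rng.normal(size=(proprio.size, TOKENS_PER_MODALITY * D_MODEL))

# Both modalities now have identical token shape and can be concatenated
# into one sequence for a shared transformer to process.
tokens = np.concatenate([
    to_tokens(vision, vision_proj),
    to_tokens(proprio, proprio_proj),
])
assert tokens.shape == (2 * TOKENS_PER_MODALITY, D_MODEL)
```

The key design choice this mirrors is that a tiny 7-number proprioception reading and a large vision input end up occupying the same number of tokens, which is why, as the researchers note, neither modality dominates the other.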

A user only needs to feed HPT a small amount of data on their robot’s design, setup, and the task they want it to perform. Then HPT transfers the knowledge the transformer gained during pretraining to learn the new task.

Enabling dexterous motions

One of the biggest challenges of developing HPT was building the massive dataset to pretrain the transformer, which included 52 datasets with more than 200,000 robot trajectories in four categories, including human demo videos and simulation.

The researchers also needed to develop an efficient way to turn raw proprioception signals from an array of sensors into data the transformer could handle.

“Proprioception is key to enable a lot of dexterous motions. Because the number of tokens in our architecture is always the same, we place the same importance on proprioception and vision,” Wang explains.

When they tested HPT, it improved robot performance by more than 20% on simulation and real-world tasks, compared with training from scratch each time. Even when the task was very different from the pretraining data, HPT still improved performance.

“This paper provides a novel approach to training a single policy across multiple robot embodiments. This enables training across diverse datasets, enabling robot learning methods to significantly scale up the size of datasets that they can train on. It also allows the model to quickly adapt to new robot embodiments, which is important as new robot designs are continuously being produced,” says David Held, associate professor at the Carnegie Mellon University Robotics Institute, who was not involved with this work.

In the future, the researchers want to study how data diversity could boost the performance of HPT. They also want to enhance HPT so it can process unlabeled data like GPT-4 and other large language models.

“Our dream is to have a universal robot brain that you could download and use for your robot without any training at all. While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models,” he says.

More information:
Lirui Wang et al, Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers, arXiv (2024). DOI: 10.48550/arxiv.2409.20537

Journal information:
arXiv


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
A faster, better way to train general-purpose robots: New technique pools diverse data (2024, October 28)
retrieved 28 October 2024
from https://techxplore.com/news/2024-10-faster-general-purpose-robots-technique.html
