
California lawmakers are trying to regulate AI before it’s too late

Credit: Unsplash/CC0 Public Domain

For four years, Jacob Hilton worked for one of the most influential startups in the Bay Area—OpenAI. His research helped test and improve the truthfulness of AI models such as ChatGPT. He believes artificial intelligence can benefit society, but he also recognizes the serious risks if the technology is left unchecked.

Hilton was among 13 current and former OpenAI and Google employees who this month signed an open letter that called for more whistleblower protections, citing broad confidentiality agreements as problematic.

“The basic situation is that employees, the people closest to the technology, they’re also the ones with the most to lose from being retaliated against for speaking up,” says Hilton, 33, now a researcher at the nonprofit Alignment Research Center, who lives in Berkeley, California.

California legislators are rushing to address such concerns through roughly 50 AI-related bills, many of which aim to place safeguards around the rapidly evolving technology, which lawmakers say could cause societal harm.

However, groups representing large tech companies argue that the proposed legislation could stifle innovation and creativity, causing California to lose its competitive edge and dramatically change how AI is developed in the state.

The effects of artificial intelligence on employment, society and culture are wide-reaching, and that’s reflected in the number of bills circulating in the Legislature. They cover a range of AI-related fears, including job replacement, data security and racial discrimination.

One bill, co-sponsored by the Teamsters, aims to mandate human oversight on driverless heavy-duty trucks. A bill backed by the Service Employees International Union attempts to ban the automation or replacement of jobs by AI systems at call centers that provide public benefit services, such as Medi-Cal. Another bill, written by Sen. Scott Wiener, D-San Francisco, would require companies developing large AI models to do safety testing.

The plethora of bills comes after politicians were criticized for not cracking down hard enough on social media companies until it was too late. During the Biden administration, federal and state Democrats have become more aggressive in going after big tech firms.

“We’ve seen with other technologies that we don’t do anything until well after there’s a big problem,” Wiener said. “Social media had contributed many good things to society … but we know there have been significant downsides to social media, and we did nothing to reduce or to mitigate those harms. And now we’re playing catch-up. I prefer not to play catch-up.”

The push comes as AI tools are quickly progressing. They read bedtime stories to children, sort drive-through orders at fast food locations and help make music videos. While some tech enthusiasts tout AI’s potential benefits, others fear job losses and safety issues.

“It caught almost everybody by surprise, including many of the experts, in how rapidly (the tech is) progressing,” said Dan Hendrycks, director of the San Francisco-based nonprofit Center for AI Safety. “If we just delay and don’t do anything for several years, then we may be waiting until it’s too late.”

Wiener’s bill, SB1047, which is backed by the Center for AI Safety, calls for companies building large AI models to conduct safety testing and have the ability to turn off models that they directly control.

The bill’s proponents say it would protect against situations such as AI being used to create biological weapons or shut down the electrical grid, for example. The bill also would require AI companies to implement ways for employees to file anonymous concerns. The state attorney general could sue to enforce safety rules.

“Very powerful technology brings both benefits and risks, and I want to make sure that the benefits of AI profoundly outweigh the risks,” Wiener said.

Opponents of the bill, including TechNet, a trade group that counts tech companies including Meta, Google and OpenAI among its members, say policymakers should move cautiously. Meta and OpenAI did not return a request for comment. Google declined to comment.

“Moving too quickly has its own sort of consequences, potentially stifling and tamping down some of the benefits that can come with this technology,” said Dylan Hoffman, executive director for California and the Southwest for TechNet.

The bill passed the Assembly Privacy and Consumer Protection Committee on Tuesday and will next go to the Assembly Judiciary Committee and Assembly Appropriations Committee, and if it passes, to the Assembly floor.

Proponents of Wiener’s bill say they’re responding to the public’s wishes. In a poll of 800 potential voters in California commissioned by the Center for AI Safety Action Fund, 86% of participants said it was an important priority for the state to develop AI safety regulations. According to the poll, 77% of participants supported the proposal to subject AI systems to safety testing.

“The status quo right now is that, when it comes to safety and security, we’re relying on voluntary public commitments made by these companies,” said Hilton, the former OpenAI employee. “But part of the problem is that there isn’t a good accountability mechanism.”

Another bill with sweeping implications for workplaces is AB 2930, which seeks to prevent “algorithmic discrimination,” or when automated systems put certain people at a disadvantage based on their race, gender or sexual orientation when it comes to hiring, pay and termination.

“We see example after example in the AI space where outputs are biased,” said Assemblymember Rebecca Bauer-Kahan, D-Orinda.

The anti-discrimination bill failed in last year’s legislative session, with major opposition from tech companies. Reintroduced this year, the measure initially had backing from high-profile tech companies Workday and Microsoft, although they have wavered in their support, expressing concerns over amendments that would put more responsibility on firms developing AI products to curb bias.

“Usually, you don’t have industries saying, ‘Regulate me,’ but various communities don’t trust AI, and what this effort is trying to do is build trust in these AI systems, which I think is really beneficial for industry,” Bauer-Kahan said.

Some labor and data privacy advocates worry that language in the proposed anti-discrimination legislation is too weak. Opponents say it’s too broad.

Chandler Morse, head of public policy at Workday, said the company supports AB 2930 as introduced. “We are currently evaluating our position on the new amendments,” Morse said.

Microsoft declined to comment.

The threat of AI is also a rallying cry for Hollywood unions. The Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists negotiated AI protections for their members during last year’s strikes, but the risks of the tech go beyond the scope of union contracts, said actors guild National Executive Director Duncan Crabtree-Ireland.

“We need public policy to catch up and to start putting these norms in place so that there is less of a Wild West kind of environment going on with AI,” Crabtree-Ireland said.

SAG-AFTRA has helped draft three federal bills related to deepfakes (misleading images and videos often involving celebrity likenesses), along with two measures in California, including AB 2602, that would strengthen worker control over use of their digital image. The legislation, if approved, would require that workers be represented by their union or legal counsel for agreements involving AI-generated likenesses to be legally binding.

Tech companies urge caution against overregulation. Todd O’Boyle, of the tech industry group Chamber of Progress, said California AI companies may opt to move elsewhere if government oversight becomes overbearing. It’s important for legislators to “not let fears of speculative harms drive policymaking when we’ve got this transformative, technological innovation that stands to create so much prosperity in its earliest days,” he said.

When regulations are put in place, it’s hard to roll them back, warned Aaron Levie, chief executive of the Redwood City, California-based cloud computing company Box, which is incorporating AI into its products.

“We need to actually have more powerful models that do even more and are more capable,” Levie said, “and then let’s start to assess the risk incrementally from there.”

But Crabtree-Ireland said tech companies are trying to slow-roll regulation by making the issues seem more complicated than they are and by saying they need to be solved in one comprehensive public policy proposal.

“We reject that completely,” Crabtree-Ireland said. “We don’t think everything about AI has to be solved all at once.”

© 2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation:
California lawmakers are trying to regulate AI before it’s too late (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-california-lawmakers-ai-late.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.

New soft robotic gripper designed with graphene and liquid crystals

Graphene plus liquid crystals equals 'Hot Fingers'
Credit: Laura van Hazendonk

Eindhoven researchers have developed a soft robotic “hand” made from liquid crystals and graphene that could be used to design future surgical robots. The new work has just been published in the journal ACS Applied Materials & Interfaces.

In our future hospitals, soft robots might be used as surgical robots. But before that can happen, researchers need to figure out how to precisely control and move these deformable robots. Added to that, many current soft robots contain metals, which means that their use in water-rich environments—like the human body—is rather limited.

TU/e researchers led by Ph.D. candidate Laura van Hazendonk, Zafeiris Khalil (as part of his master’s research), Michael Debije, and Heiner Friedrich have designed a soft robotic hand, or gripper, made from graphene and liquid crystals (both organic materials). This opens up the possibility that such a device could one day be used safely in surgery.

Robots have an enormous influence on our world. For instance, in industry, robots build automobiles and televisions. In hospitals, robots—such as the da Vinci robotic surgical system—assist surgeons and allow for minimally invasive operations. And some of us even have robots to do our vacuum cleaning at home.

“Society has become dependent on robots, and we’re coming up with new ways to use them,” says Van Hazendonk, Ph.D. researcher in the Department of Chemical Engineering and Chemistry. “But in devising new ways to use them, we need to think about using different types of materials to make them.”

Thinking soft

The different materials that Van Hazendonk is referring to are fluids, gels, and elastic materials—which are all easily deformable. “Typically, robots are made from metals, which are rigid and hard. But in certain applications, robots made from hard and rigid materials limit the performance of the robot,” says Van Hazendonk. “The solution is to think soft.”

In soft robotics, the goal is to make robots from materials like fluids or gels that can deform in certain situations and then can act like robots made from traditional rigid and hard materials.

One area where soft robots look set to have a major impact is in surgical procedures. Van Hazendonk adds, “For a surgeon, many operations can be complex and delicate, and therefore require precise dexterity on the part of the surgeon. Sometimes this just isn’t possible, and they turn to robots.

“But rigid robots may not be able to access some areas with ease either. That’s where soft robots can come to the fore, and our goal was to offer a potential new helping hand for use in clamping and suturing devices in surgeries, for example.”

Turning to Nobel materials

For their research, Van Hazendonk and her colleagues opted to use a different type of deformable material—liquid crystals—along with graphene to make a soft gripper device or “hand” with four controllable and deformable “fingers.”

Intriguingly, both liquid crystals and graphene are directly or indirectly connected to Nobel Prizes in Physics over the last 30 years or so. Back in 1991, Pierre-Gilles de Gennes won the prize for his work on order in complex matter, such as liquid crystals. And in 2010, Andre Geim and Konstantin Novoselov won the prize for their work on graphene—the super-strong material that is also transparent and an effective conductor of electricity and heat.

“A liquid crystal behaves as a liquid or a solid depending on how it is excited or perturbed. When it flows, it acts like a liquid. But in special situations, the molecules in the liquid can arrange themselves to create a regular pattern or structure, such as a crystal you would see in a solid material under a powerful microscope,” explains Van Hazendonk. “The ability for liquid crystal materials to act like this is perfect when it comes to making soft robots.”

Graphene plus liquid crystals equals 'Hot Fingers'
Credit: ACS Applied Materials & Interfaces (2024). DOI: 10.1021/acsami.4c06130

Actuator challenge

With the materials selected, the researchers set out to design and make an actuator. “Actuators control and regulate motion in robotic systems. Usually, the actuator responds or moves when supplied with electricity, air, or a fluid,” says Van Hazendonk. “In our work, we turned to something else to drive liquid-crystal network (LCN) actuators.”

The researchers designed a gripper device with four ‘fingers’ controlled using LCN actuators, which deform in response to heat from graphene-based heating elements, or tracks, embedded in the fingers of the gripper or ‘hand.’

Bending of the fingers

“When electrical current passes through the black graphene tracks, the tracks heat up and then the heat from the tracks changes the molecular structure of the liquid crystal fingers and some of the molecules go from being ordered to disordered. This leads to bending of the fingers,” says Van Hazendonk. “Once the electrical current is switched off, the heat is lost, and the gripper returns to its initial state.”

One of the biggest challenges for the researchers related to the graphene heating elements, as highlighted by Heiner Friedrich, assistant professor at the Department of Chemical Engineering and Chemistry.

“We needed to make sure that they would heat to the right temperature to change the liquid crystal layer, and we needed to make sure that this could be done at safe voltages. Initially, the graphene elements didn’t reach the right temperatures at safe voltages, or they would overheat and burn the device,” says Friedrich. “This and many other important problems were solved by Zafeiris Khalil during his MSc thesis.”

The researchers didn’t let this problem deter them, and in the end, they designed an actuator that can operate without any issues at voltages below 15 volts. And in terms of performance, the grippers can lift small objects with a mass between 70 and 100 milligrams. “This might not sound like a lot, but in medical applications such as surgery, this can be useful for the exact and minuscule movement of tiny tools, implants, or biological tissue,” says Van Hazendonk.

For Van Hazendonk—who combines her Ph.D. research with being a member of the provincial parliament of Noord-Brabant (Provinciale Staten)—this research has been eye-opening.

She says, “I love how this work combines a useful and tangible application. The gripper device is based on fundamental technologies, but the actuator itself could form the basis for a suite of robots for use in biomedical or surgical applications in the future.”

And, in the future, Van Hazendonk and her colleagues have some interesting plans. She concludes, “We want to make a fully printed robot by figuring out a way to 3D-print the liquid-crystal layer. For our gripper, we made the layer by casting materials in a mold. Other researchers in the group of Michael Debije have shown that liquid crystals can be printed. For this gripper, we have printed the graphene layer, so it would be cool to have a fully printed device.”

More information:
Laura S. van Hazendonk et al, Hot Fingers: Individually Addressable Graphene-Heater Actuated Liquid Crystal Grippers, ACS Applied Materials & Interfaces (2024). DOI: 10.1021/acsami.4c06130

Citation:
New soft robotic gripper designed with graphene and liquid crystals (2024, June 18)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-soft-robotic-gripper-graphene-liquid.html


A symbolic model checking approach to verify quantum circuits

Towards error-free quantum computing: A symbolic model checking approach to verify quantum circuits
The proposed model-checking approach can be used for the specification and verification of quantum circuits with their desired properties. Credit: PeerJ Computer Science (2024). DOI: 10.7717/peerj-cs.2098

Quantum computing is a rapidly growing technology that utilizes the laws of quantum physics to solve complex computational problems that are extremely difficult for classical computing. Researchers worldwide have developed many quantum algorithms to take advantage of quantum computing, demonstrating significant improvements over classical algorithms.

Quantum circuits, which are models of quantum computation, are crucial for developing these algorithms. They are used to design and implement quantum algorithms before actual deployment on quantum hardware.

Quantum circuits comprise a sequence of quantum gates, measurements, and initializations of qubits, among other actions. Quantum gates perform quantum computations by operating on qubits, which are the quantum counterparts of classical bits (0s and 1s), and by manipulating the quantum states of the system. Quantum states are the output of quantum circuits, which can be measured to obtain classical outcomes with probabilities, from which further actions can be taken.

Since quantum computing is often counter-intuitive and dramatically different from classical computing, the probability of errors is much higher. Hence, it is necessary to verify that quantum circuits have the desired properties and function as intended. This can be done through model checking, a formal verification technique used to verify whether systems satisfy desired properties.

Although some model checkers are dedicated to quantum programs, there is a gap between model-checking quantum programs and quantum circuits, due to their different representations and the absence of iterations in quantum circuits.

Addressing this gap, Assistant Professor Canh Minh Do and Professor Kazuhiro Ogata from Japan Advanced Institute of Science and Technology (JAIST) proposed a symbolic model checking approach.

Dr. Do explains, “Considering the success of model-checking methods for verification of classical circuits, model-checking of quantum circuits is a promising approach. We developed a symbolic approach for model checking of quantum circuits using laws of quantum mechanics and basic matrix operations using the Maude programming language.”

Their approach is detailed in a study published in the journal PeerJ Computer Science.

Maude is a high-level specification/programming language based on rewriting logic, which supports the formal specification and verification of complex systems. It is equipped with a Linear Temporal Logic (LTL) model checker, which checks whether systems satisfy the specified properties.

Additionally, Maude allows the creation of precise mathematical models of systems. The researchers formally specified quantum circuits in Maude, as a series of quantum gates and measurement applications, represented as basic matrix operations using laws of quantum mechanics with the Dirac notation. They specified the initial state and the desired properties of the system in LTL.

By using a set of quantum physics laws and basic matrix operations formalized in their specifications, quantum computation can be reasoned about in Maude. The researchers then used the built-in Maude LTL model checker to automatically verify whether quantum circuits satisfy the desired properties.

They used this approach to check several early quantum communication protocols, including Superdense Coding, Quantum Teleportation, Quantum Secret Sharing, Entanglement Swapping, Quantum Gate Teleportation, Two Mirror-image Teleportation, and Quantum Network Coding, each with increasing complexity.
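The flavor of this matrix-based verification can be illustrated outside Maude. The sketch below is our own illustration in Python, not the authors’ Maude specifications: it brute-force checks the simplest protocol on the list, Superdense Coding, by applying gate matrices to a 2-qubit state vector and confirming that Bob’s measurement recovers every 2-bit message Alice encodes. The gate set, qubit ordering, and encoding convention are assumptions made for this example.

```python
from math import sqrt

# Single-qubit gates as 2x2 matrices (qubit 0 is the high-order bit).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]

# CNOT with qubit 0 as control and qubit 1 as target.
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

def kron(a, b):
    """Kronecker (tensor) product of two matrices."""
    return [[a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

def matmul(a, b):
    """Matrix product a @ b."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def apply(m, v):
    """Apply matrix m to state vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def superdense(b1, b0):
    """Encode bits (b1, b0) via Superdense Coding; return Bob's measured index."""
    state = [1, 0, 0, 0]                     # |00>
    state = apply(kron(H, I), state)         # prepare the shared Bell pair:
    state = apply(CNOT, state)               # (|00> + |11>) / sqrt(2)
    enc = I                                  # Alice's local encoding:
    if b0:
        enc = matmul(X, enc)                 # X for the low bit ...
    if b1:
        enc = matmul(Z, enc)                 # ... then Z for the high bit
    state = apply(kron(enc, I), state)
    state = apply(CNOT, state)               # Bob decodes: CNOT, then H,
    state = apply(kron(H, I), state)         # then measures in the Z basis
    probs = [abs(amp) ** 2 for amp in state]
    return max(range(4), key=lambda i: probs[i])

# The property to verify: decoding recovers every message with certainty.
for b1 in (0, 1):
    for b0 in (0, 1):
        assert superdense(b1, b0) == 2 * b1 + b0
```

Where this toy version enumerates concrete state vectors, the paper’s approach works symbolically with Dirac notation and rewriting logic, which is what lets it scale to the more complex protocols in the list above.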

They found that the original version of Quantum Gate Teleportation did not satisfy its desired property. Using their approach, the researchers then proposed a revised version and confirmed its correctness.

These findings signify the importance of the proposed innovative approach for the verification of quantum circuits. However, the researchers also point out some limitations of their method, requiring further research.

Dr. Do says, “In the future, we aim to extend our symbolic reasoning to handle more quantum gates and more complicated reasoning on complex number operations. We also would like to apply our symbolic approach to model-checking quantum programs and quantum cryptography protocols.”

Verifying the intended operation of quantum circuits will be highly valuable in the upcoming era of quantum computing. In this context, the present approach marks the first step toward a general framework for the verification and specification of quantum circuits, paving the way for error-free quantum computing.

More information:
Canh Minh Do et al, Symbolic model checking quantum circuits in Maude, PeerJ Computer Science (2024). DOI: 10.7717/peerj-cs.2098

Citation:
Toward error-free quantum computing: A symbolic model checking approach to verify quantum circuits (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-error-free-quantum-approach-circuits.html


AI-powered noise-filtering headphones give users the power to choose what to hear

AI-powered headphones filter only unwanted noise #ASA186
Researchers augmented noise-canceling headphones with a smartphone-based neural network to identify ambient sounds and preserve them while filtering out everything else. Credit: Shyam Gollakota

Noise-canceling headphones are a godsend for living and working in loud environments. They automatically identify background sounds and cancel them out for much-needed peace and quiet. However, typical noise-canceling fails to distinguish between unwanted background sounds and crucial information, leaving headphone users unaware of their surroundings.

Shyam Gollakota, from the University of Washington, is an expert in using AI tools for real-time audio processing. His team created a system for targeted speech hearing in noisy environments and developed AI-based headphones that selectively filter out specific sounds while preserving others. He presents his work May 16, as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13–17 at the Shaw Center located in downtown Ottawa, Ontario, Canada.

“Imagine you are in a park, admiring the sounds of chirping birds, but then you have the loud chatter of a nearby group of people who just can’t stop talking,” said Gollakota. “Now imagine if your headphones could grant you the ability to focus on the sounds of the birds while the rest of the noise just goes away. That is exactly what we set out to achieve with our system.”

Gollakota and his team combined noise-canceling technology with a smartphone-based neural network trained to identify 20 different environmental sound categories. These include alarm clocks, crying babies, sirens, car horns, and birdsong. When a user selects one or more of these categories, the software identifies and plays those sounds through the headphones in real time while filtering out everything else.
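The selection step described above can be sketched in a few lines. The toy code below is our own illustration, not the UW system: a real implementation runs a neural network on each audio frame, whereas here frames arrive pre-labeled, so the `toy_classifier` stand-in, the category names, and the frame format are all assumptions made for the example.

```python
# Toy "semantic hearing" filter: keep user-selected sound categories,
# silence everything else.

SELECTABLE = {"alarm_clock", "crying_baby", "siren", "car_horn", "birdsong"}

def toy_classifier(frame):
    """Stand-in for the smartphone neural network: frames here are
    pre-labeled (category, samples) pairs, so classification is a lookup."""
    label, _samples = frame
    return label

def semantic_filter(frames, keep):
    """Pass frames whose predicted category is in `keep`;
    replace all other frames with silence."""
    assert keep <= SELECTABLE, "can only select known categories"
    out = []
    for frame in frames:
        _label, samples = frame
        if toy_classifier(frame) in keep:
            out.append(samples)                  # play the target sound
        else:
            out.append([0.0] * len(samples))     # cancel everything else
    return out

stream = [("birdsong", [0.2, -0.1]), ("speech", [0.5, 0.4]), ("siren", [0.9, 0.8])]
filtered = semantic_filter(stream, keep={"birdsong", "siren"})
# → [[0.2, -0.1], [0.0, 0.0], [0.9, 0.8]]
```

The hard parts of the real system, as the article goes on to explain, are the classification and source-separation networks themselves and the sub-hundredth-of-a-second latency budget, none of which this selection sketch captures.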

Making this system work seamlessly was not an easy task, however.

“To achieve what we want, we first needed a high-level intelligence to identify all the different sounds in an environment,” said Gollakota.

“Then, we needed to separate the target sounds from all the interfering noises. If this is not hard enough, whatever sounds we extracted needed to sync with the user’s visual senses, since they cannot be hearing someone two seconds too late. This means the neural network algorithms must process sounds in real time in under a hundredth of a second, which is what we achieved.”

The team employed this AI-powered approach to focus on human speech. Relying on similar content-aware techniques, their algorithm can identify a speaker and isolate their voice from ambient noise in real time for clearer conversations.

Gollakota is excited to be at the forefront of the next generation of audio devices.

“We have a very unique opportunity to create the future of intelligent hearables that can enhance human hearing capability and augment intelligence to make lives better,” said Gollakota.

More information:
Technical program: https://eppro02.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASASPRING24

Citation:
AI-powered noise-filtering headphones give users the power to choose what to hear (2024, May 16)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-ai-powered-noise-filtering-headphones.html


Europe’s fight with Big Tech

Tech giants have been targeted by the EU for a number of allegedly unfair practices.

The European Union warned Apple on Monday that its App Store is breaching its digital competition rules, placing the iPhone maker at risk of billions of dollars in fines.

It is the latest in a years-long battle between Brussels and giant tech firms, covering subjects from data privacy to disinformation.

Stifling competition

Brussels has doled out over 10 billion euros in fines to tech firms for abusing their dominant market positions.

The latest threat for Apple comes three months after the bloc hit the California firm with a 1.8-billion-euro ($1.9 billion) penalty for preventing European users from accessing information about cheaper music streaming services.

Among big tech firms, only Google has faced a bigger single antitrust fine—more than four billion euros in 2018 for using its Android mobile operating system to promote its search engine.

Google has also incurred billion-plus fines for abusing its power in the online shopping and advertising sectors.

The European Commission, the EU’s executive, recommended last year that Google should sell parts of its business and could face a fine of up to 10 percent of its global revenue if it fails to comply.

Privacy

Ireland issues the stiffest data privacy fines because the laws are enforced by local regulators, and Dublin hosts the European offices of several big tech firms.

The Irish regulator handed TikTok a 345-million-euro penalty for mishandling children’s data last September, just months after it hit Meta with a record fine of 1.2 billion euros for illegally transferring personal data between Europe and the United States.

Luxembourg had previously held the record for data fines after it slapped Amazon with a 746-million-euro penalty in 2021.

Taxation

The EU has had little success in getting tech companies to pay more taxes in Europe, where they are accused of funneling profits into low-tax economies like Ireland and Luxembourg.

In one of the most notorious cases, the European Commission in 2016 ordered Apple to pay Ireland more than a decade’s worth of back taxes—13 billion euros—after ruling that a sweetheart deal with the government was illegal.

But EU judges overturned the decision, saying there was no evidence the company had broken the rules, a ruling the commission has been trying to reverse ever since.

The commission is also fighting to reverse another court loss, after judges overruled its order for Amazon to repay 250 million euros in back taxes to Luxembourg.

Disinformation, hate speech

Web platforms have long faced accusations of failing to combat hate speech, disinformation and piracy.

The EU passed the Digital Services Act last year, which is designed to force companies to tackle these issues or face fines of up to six percent of their global turnover.

Already the bloc has begun to show how the DSA might be applied, opening probes into Facebook and Instagram for failing to tackle election-related disinformation.

The bloc has also warned Microsoft that the falsehoods generated by its AI search could fall foul of the DSA.

Paying for news

Google and other online platforms have also been accused of making billions from news without sharing the revenue with those who gather it.

To tackle this, the EU created a form of copyright called “neighboring rights” that allows print media to demand compensation for using their content.

France has been a test case for the rules, and after initial resistance, Google and Facebook both agreed to pay some French media for articles shown in web searches.

© 2024 AFP

Citation:
Dominance, data, disinformation: Europe’s fight with Big Tech (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-dominance-disinformation-europe-big-tech.html
