
Apple holds talks with rival Meta over AI: Report

Credit: Armand Valendez from Pexels

Apple is talking to major rival Meta about integrating the Facebook parent company’s generative AI into its products, as it tries to catch up with rivals on artificial intelligence, the Wall Street Journal reported Sunday.

The report comes after Apple also struck a deal with OpenAI, the creator of ChatGPT, to help power its Apple Intelligence suite of new AI features for its devices.

For months, Apple has been under pressure to persuade doubters of its AI strategy, after Microsoft and Google rolled out products in rapid-fire succession.

It has developed its own, smaller artificial intelligence models but said it will turn to others such as OpenAI to boost its in-house offering.

According to the Journal, which cited sources close to the matter, Meta has held discussions with Apple over integrating its own generative AI model into Apple Intelligence.

Apple senior vice president of software engineering Craig Federighi said in early June that Apple also wanted to integrate capabilities from Google’s generative AI system, Gemini, into its devices.

The big challenge for Apple has been how to infuse ChatGPT-style AI—which voraciously feeds off data—into its products without weakening its heavily promoted user privacy and security, according to analysts.

Apple Intelligence will enable users to create their own emojis based on a description in everyday language, or to generate brief summaries of e-mails in the mailbox.

Apple said Siri, its voice assistant, will also get an AI-infused upgrade and will now appear as a pulsating light around the edge of the screen.

Launched more than 12 years ago, Siri has long been seen as a dated feature, overtaken by a new generation of assistants such as GPT-4o, OpenAI’s latest offering.

According to Canalys, 16 percent of smartphones shipped this year will be equipped with generative AI features, a proportion it expects to rise to 54 percent by 2028.

© 2024 AFP

Citation:
Apple holds talks with rival Meta over AI: Report (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-apple-rival-meta-ai.html


Wirelessly-powered relay will help bring 5G technology to smart factories, say researchers

A key innovation in this design is the use of the 5.7 GHz wireless power transfer (WPT) signal both as a means of generating DC power through a rectifier and as a local oscillator for the mixing and unmixing circuits. By amplifying the input signal after down-conversion to a lower frequency via mixing, the circuit achieves higher efficiency and gain. Credit: 2024 IEEE Symposium on VLSI Technology & Circuits

A recently developed, wirelessly powered 5G relay could accelerate the development of smart factories, report scientists from Tokyo Tech. By adopting a lower operating frequency for wireless power transfer, the proposed relay design addresses several limitations of current systems, including limited range and efficiency. In turn, this allows for a more versatile and widespread arrangement of sensors and transceivers in industrial settings.

One of the hallmarks of the Information Age is the transformation of industries towards a greater flow of information. This can be readily seen in high-tech factories and warehouses, where wireless sensors and transceivers are installed in robots, production machinery, and automatic vehicles. In many cases, 5G networks are used to orchestrate operations and communications between these devices.

To avoid relying on cumbersome wired power sources, sensors and transceivers can be energized remotely via wireless power transfer (WPT). However, one problem with conventional WPT designs is that they operate at 24 GHz.

At such high frequencies, transmission beams must be extremely narrow to avoid energy losses. Moreover, power can only be transmitted if there is a clear line of sight between the WPT system and the target device. Since 5G relays are often used to extend the range of 5G base stations, WPT needs to reach even further, which is yet another challenge for 24 GHz systems.

To address the limitations of WPT, a research team from Tokyo Institute of Technology has come up with a clever solution. In a recent study, presented at the 2024 IEEE Symposium on VLSI Technology & Circuits, they developed a novel 5G relay that can be powered wirelessly at a lower frequency of 5.7 GHz.

“By using 5.7 GHz as the WPT frequency, we can get wider coverage than conventional 24 GHz WPT systems, enabling a wider range of devices to operate simultaneously,” explains senior author and Associate Professor Atsushi Shirane.

The prototype of the proposed relay transceiver was fabricated with 65-nm Si CMOS chips and a 4×2 patch phased-array antenna board. Credit: 2024 IEEE Symposium on VLSI Technology & Circuits

The proposed wirelessly-powered relay is meant to act as an intermediary receiver and transmitter of 5G signals, which can originate from a 5G base station or wireless devices. The key innovation of this system is the use of a rectifier-type mixer, which performs 4th-order subharmonic mixing while also generating DC power.

Notably, the mixer uses the received 5.7 GHz WPT signal as a local signal. With this local signal, together with multiplying circuits, phase shifters, and a power combiner, the mixer ‘down-converts’ a received 28 GHz signal into a 5.2 GHz signal. Then, this 5.2 GHz signal is internally amplified, up-converted to 28 GHz through the inverse process, and retransmitted to its intended destination.
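The frequency plan follows from simple arithmetic: the 4th harmonic of the 5.7 GHz power signal acts as an effective 22.8 GHz local oscillator, which brings the 28 GHz signal down to 5.2 GHz and back up again. A minimal sketch of that bookkeeping, illustrating only the numbers quoted above rather than the circuit itself:

# Frequency plan implied by the article's figures (illustrative only).
F_WPT = 5.7    # GHz, wireless power transfer signal, reused as the local signal
F_RF = 28.0    # GHz, received 5G NR signal
f_lo = 4 * F_WPT          # 22.8 GHz effective LO from 4th-order subharmonic mixing
f_if = F_RF - f_lo        # 5.2 GHz intermediate frequency that gets amplified
f_out = f_if + f_lo       # up-conversion restores the 28 GHz carrier for retransmission
print(round(f_if, 1), round(f_out, 1))   # -> 5.2 28.0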

To drive these internal amplifiers, the proposed system first rectifies the 5.7 GHz WPT signal to produce DC power, which is managed by a dedicated power management unit. This ingenious approach offers several advantages, as Shirane highlights, “Since the 5.7 GHz WPT signal has less path loss than the 24 GHz signal, more power can be obtained from a rectifier. In addition, the 5.7 GHz rectifier has a lower loss than 24 GHz rectifiers and can operate at a higher power conversion efficiency.”
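The path-loss argument can be quantified with the standard free-space model: loss grows with the square of frequency, so moving the power link from 24 GHz down to 5.7 GHz saves about 12.5 dB (roughly an 18-fold difference in received power) over the same distance. A minimal sketch under idealized free-space assumptions, not figures from the study:

import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss: 20 * log10(4 * pi * d * f / c), in dB.
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 10.0  # meters, an arbitrary illustrative link distance
advantage_db = fspl_db(d, 24e9) - fspl_db(d, 5.7e9)
print(round(advantage_db, 1))   # -> 12.5 dB, independent of the chosen distance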

Finally, this proposed circuit design allows for selecting the transistor size, bias voltage, matching, cutoff frequency of the filter, and load to maximize conversion efficiency and conversion gain simultaneously.

Through several experiments, the research team showcased the capabilities of their proposed relay. Fabricated in standard CMOS technology on a die measuring just 1.5 mm by 0.77 mm, a single chip can output 6.45 mW at an input power of 10.7 dBm, and multiple chips could be combined to achieve a higher power output. Considering its many advantages, the proposed 5.7 GHz WPT system could thus greatly contribute to the development of smart factories.
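As a rough unit check on the figures above (assuming the quoted 10.7 dBm is the power reaching the chip), 10.7 dBm corresponds to about 11.7 mW, so 6.45 mW out would be roughly 55 percent of that; the study's own efficiency figures are not reported in this article.

# Back-of-the-envelope conversion only; not a result from the paper.
input_dbm = 10.7
input_mw = 10 ** (input_dbm / 10)              # ~11.7 mW
output_mw = 6.45
print(round(input_mw, 1), round(output_mw / input_mw, 2))   # -> 11.7 0.55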

More information:
Presentation: A 28GHz 5GNR Wirelessly Powered Relay Transceiver Using Rectifier-Type 4th-Order Sub-Harmonic Mixer

Citation:
Wirelessly-powered relay will help bring 5G technology to smart factories, say researchers (2024, June 17)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-wirelessly-powered-relay-5g-technology.html


Hidden humor, the software developer’s secret weapon

Credit: Pixabay/CC0 Public Domain

Writing software code can be a painstaking and stressful process—and downright boring when the job is repetitive and you’re doing it remotely, alone in front of your screen.

To liven things up, many developers and testers use humor to relieve the monotony and connect with their virtual colleagues by sharing a joke. Over time, it creates a bond with fellow developers, though the humor and creativity slipped in between the lines of code are invisible to the rest of us.

“Humor creates relationships between people who are physically distant and is a good way to stave off boredom,” said Benoit Baudry, a professor in the Department of Computer Science and Operations Research at Université de Montréal. “It’s a way to build engagement.” Until recently, Baudry was at the Royal Institute of Technology in Stockholm, where he and his colleagues studied the special humor of developers.

“Developers are people who love software,” said Baudry. “So they try to create emotional bonds using the digital technology that is their work tool.”

But they have to exercise some caution about when and where they insert jokes and comments. They don’t want any of their jests to end up on Instagram.

To find out more about how they do it, Baudry and his fellow researchers circulated an online questionnaire that was posted on developer sites. More than 125 developers from around the world responded. They reported using humor most frequently in test inputs and “commits,” or changes to the code. A sly dialogue unfolds between the test lines.

The research was published in the Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society on June 6 and is also available on the arXiv preprint server.

Darth Vader, Luke et al

Baudry and his co-authors looked at Faker, a library that generates random data for use in testing code. Instead of lorem ipsum—a sequence of meaningless words commonly used as a placeholder for text on a page until it can be replaced by the real thing—developers will sprinkle their lines with cultural references such as allusions to Seinfeld or quotes from poets.

“Some references are fairly specific, others are universal: who doesn’t know the characters from Star Wars or The Matrix?” said Baudry. Characters from cult films are frequently used in titles, as are quotations. An example from Faker: “The wise animal blends into its surroundings” (a quote from the Dune films). Nothing edgy or inappropriate, just light-hearted asides through which developers signal their interests and elicit a smile from their colleagues.
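For readers curious how such references make it into test data, the mechanics are simple: Faker-style libraries draw placeholder values from curated lists, and developers can add lists of their own. A minimal sketch using the Python faker package; the provider and quotes below are illustrative, not part of the Faker project itself:

from faker import Faker
from faker.providers import BaseProvider

class MovieQuoteProvider(BaseProvider):
    # Hypothetical custom provider; real Faker locales ship their own themed data sets.
    quotes = (
        "The wise animal blends into its surroundings.",
        "These aren't the droids you're looking for.",
    )
    def movie_quote(self):
        return self.random_element(self.quotes)

fake = Faker()
fake.add_provider(MovieQuoteProvider)
print(fake.movie_quote())   # prints one of the quotes at random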

“Personally, I like to use characters from the 1998 film The Big Lebowski in error messages,” said Baudry. He is also enthusiastic about lolcommits, a utility that lets developers send a selfie when they make changes to code. “These pics foster bonds with colleagues and are a way to celebrate when the job is done,” he said.

The pioneer who paved the way

The trailblazer for quips in code was the brilliant engineer and computer scientist Margaret Hamilton, who led the MIT team that developed the onboard flight software for NASA’s Apollo 11 lunar landing in 1969. When the code was made public, people could see that it was peppered with jokes, Shakespeare quotes and references to The Wizard of Oz.

Humor in code “helps keep it fun,” one of the respondents to the survey commented. “I love it and think fondly of people writing that part of the code or comment.”

Humor “makes a codebase feel more humanized, like it was created by a real person,” another respondent said.

Naturally, there are limits to the kind of humor that can be injected into code. “It should not create a toxic or unwelcoming culture,” cautioned one respondent.

Baudry’s interest in tech humor is not new. Last year, he published a fascinating article on “Easter eggs,” features hidden in software which can be unlocked by pressing a combination of keys or correctly positioning the pointer. But unlike code humor, Easter eggs can be discovered by the public, especially in video games.

Baudry also wants the users of technology to be more aware of the behind-the-scenes human activity that produces the thousands of connections and apps that are woven into our lives. In the past, he has given talks on art and technology while projecting code onto giant screens in public places. For the love of code!

More information:
Deepika Tiwari et al, With Great Humor Comes Great Developer Engagement, Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society (2024). DOI: 10.1145/3639475.3640099. On arXiv: DOI: 10.48550/arxiv.2312.01680

Journal information:
arXiv


Citation:
Hidden humor, the software developer’s secret weapon (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-hidden-humor-software-secret-weapon.html


Managing screen time by making phones slightly more annoying to use

Existing apps for managing screen time can abruptly lock users out of their phone. If users were in the middle of an important task, they could scramble to skip the time limit, opening themselves up to spend more time on their phone than originally intended. InteractOut’s interventions are more gradual and allow users to decide when to put down their phones while also encouraging them to think harder about their smartphone use. Credit: Jeremy Little, Michigan Engineering

The best way to help smartphone users manage their screen time may be to make phones progressively more annoying to use, according to new University of Michigan research.

The study, published in Proceedings of the CHI Conference on Human Factors in Computing Systems, shows that interfering with swiping and tapping functions is around 16% more effective at reducing screen time and the number of times an app is opened than forcibly locking users out of their phones.

The lockout strategy is used by many screen-time management apps today, and such apps also send users a notification offering more time before locking. Researchers discussed the findings Monday, May 13, at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2024) in Honolulu.

“Lockout apps are pretty disruptive, so if someone is in the middle of an important task or a game, they’ll scramble to skip through the screen timer. Then, they can forget about the time limit and spend more time on the phone than they wanted to,” said Anhong Guo, U-M assistant professor of computer science and engineering and the corresponding author of the study.


The researchers’ InteractOut app is more effective at limiting screen time because it is less restrictive and harder to ignore than hard lockouts. Once the user’s designated screen limit has been reached, InteractOut can delay the phone’s response to a user’s gesture, shift where tapping motions are registered or slow the screen scrolling speed.

The strength of the delays and shifts continues to increase each time the user touches the phone, up to a pre-set maximum, and the user can decide how the app interferes with their phone use. The app’s gradual interference allows users to continue using their phone, but with a little extra difficulty.
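InteractOut’s code is not published in the article, but the escalation it describes (a small perturbation added to each gesture, growing with every touch up to a preset ceiling) can be sketched in a few lines. The class and parameters below are assumptions for illustration, not the app’s actual implementation:

import random

class GestureFriction:
    # Illustrative model of gradually increasing input friction, not InteractOut's code.
    def __init__(self, max_delay_ms=500, max_shift_px=60, step=0.02):
        self.max_delay_ms = max_delay_ms   # ceiling on added response latency
        self.max_shift_px = max_shift_px   # ceiling on how far a tap is displaced
        self.step = step                   # how quickly friction ramps up per touch
        self.level = 0.0                   # 0 = no interference, 1 = maximum

    def on_touch(self, x, y):
        # Each touch past the daily limit nudges the friction level upward.
        self.level = min(1.0, self.level + self.step)
        delay_ms = self.level * self.max_delay_ms
        shift = self.level * self.max_shift_px
        shifted = (x + random.uniform(-shift, shift),
                   y + random.uniform(-shift, shift))
        return delay_ms, shifted   # delay the response and register the tap off-target

friction = GestureFriction()
print(friction.on_touch(120, 480))   # small delay and offset at first, growing over time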

“If we just continuously add a little bit of friction to the interaction with the phone, eventually the user becomes more aware of what they are doing because there’s a mismatch between what they expect to happen and what actually happens. That makes using smartphones a more deliberate process,” said Guo, who also is an assistant professor of information.

The researchers believe that forcing more mindfulness into otherwise mindless gesturing is the key to making smartphones less addictive.

“We want to evoke users’ awareness of using their smartphone so that they can use it more productively,” said study first author Tao Lu, a recent U-M bachelor of science graduate in computer science who is now a master’s student at the Georgia Institute of Technology.

While designing InteractOut, the researchers had to be careful not to make the phone so inconvenient that it became insufferable. To ensure their software strikes the right balance, the team tested their app’s performance in a field study of 42 participants that took place over five weeks.

In the first week of the study, the researchers reviewed how often each participant used their phones without screen time management tools. Then, each participant installed the InteractOut app on their Android phone and chose which other apps it could monitor and interfere with. The researchers fixed the participants’ daily screen time allowance to one hour, after which InteractOut began to modify swipe and tap functions inside of the specified apps.

All participants received a random swipe and tap intervention from InteractOut for a single two-week period, and their screen time was compared to a separate two-week period in which they used Timed Lockout, a widely available app that imposes hard lockouts.

The researchers found that InteractOut was not only more effective at reducing screen time for the targeted apps than a hard lockout, but it was also better received by the study participants. When the screen-time management apps were activated, around 62% of the participants kept InteractOut’s interventions on for the day, but only 36% of the participants did the same with Timed Lockout.

There is still room for improvement, however. The participants thought that InteractOut was too intrusive for some games that require precise, real-time movements. It was also less effective at limiting the amount of time spent on apps that require few tapping or swiping gestures, such as video streaming services. Guo plans to find ways to tailor the app’s interventions to be better suited for different kinds of phone apps.

The team has submitted an invention disclosure for the software and hopes to eventually bring it to market.

More information:
Tao Lu et al, InteractOut: Leveraging Interaction Proxies as Input Manipulation Strategies for Reducing Smartphone Overuse, Proceedings of the CHI Conference on Human Factors in Computing Systems (2024). DOI: 10.1145/3613904.3642317

Citation:
Managing screen time by making phones slightly more annoying to use (2024, May 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-screen-slightly-annoying.html


California lawmakers are trying to regulate AI before it’s too late

Credit: Unsplash/CC0 Public Domain

For four years, Jacob Hilton worked for one of the most influential startups in the Bay Area—OpenAI. His research helped test and improve the truthfulness of AI models such as ChatGPT. He believes artificial intelligence can benefit society, but he also recognizes the serious risks if the technology is left unchecked.

Hilton was among 13 current and former OpenAI and Google employees who this month signed an open letter that called for more whistleblower protections, citing broad confidentiality agreements as problematic.

“The basic situation is that employees, the people closest to the technology, they’re also the ones with the most to lose from being retaliated against for speaking up,” says Hilton, 33, now a researcher at the nonprofit Alignment Research Center, who lives in Berkeley, California.

California legislators are rushing to address such concerns through roughly 50 AI-related bills, many of which aim to place safeguards around the rapidly evolving technology, which lawmakers say could cause societal harm.

However, groups representing large tech companies argue that the proposed legislation could stifle innovation and creativity, causing California to lose its competitive edge and dramatically change how AI is developed in the state.

The effects of artificial intelligence on employment, society and culture are wide-reaching, and that’s reflected in the number of bills circulating in the Legislature. They cover a range of AI-related fears, including job replacement, data security and racial discrimination.

One bill, co-sponsored by the Teamsters, aims to mandate human oversight on driverless heavy-duty trucks. A bill backed by the Service Employees International Union attempts to ban the automation or replacement of jobs by AI systems at call centers that provide public benefit services, such as Medi-Cal. Another bill, written by Sen. Scott Wiener, D-San Francisco, would require companies developing large AI models to do safety testing.

The plethora of bills comes after politicians were criticized for not cracking down hard enough on social media companies until it was too late. During the Biden administration, federal and state Democrats have become more aggressive in going after big tech firms.

“We’ve seen with other technologies that we don’t do anything until well after there’s a big problem,” Wiener said. “Social media had contributed many good things to society … but we know there have been significant downsides to social media, and we did nothing to reduce or to mitigate those harms. And now we’re playing catch-up. I prefer not to play catch-up.”

The push comes as AI tools are quickly progressing. They read bedtime stories to children, sort drive-through orders at fast food locations and help make music videos. While some tech enthusiasts tout AI’s potential benefits, others fear job losses and safety issues.

“It caught almost everybody by surprise, including many of the experts, in how rapidly (the tech is) progressing,” said Dan Hendrycks, director of the San Francisco-based nonprofit Center for AI Safety. “If we just delay and don’t do anything for several years, then we may be waiting until it’s too late.”

Wiener’s bill, SB1047, which is backed by the Center for AI Safety, calls for companies building large AI models to conduct safety testing and have the ability to turn off models that they directly control.

The bill’s proponents say it would protect against situations such as AI being used to create biological weapons or shut down the electrical grid, for example. The bill also would require AI companies to implement ways for employees to file anonymous concerns. The state attorney general could sue to enforce safety rules.

“Very powerful technology brings both benefits and risks, and I want to make sure that the benefits of AI profoundly outweigh the risks,” Wiener said.

Opponents of the bill, including TechNet, a trade group that counts tech companies including Meta, Google and OpenAI among its members, say policymakers should move cautiously. Meta and OpenAI did not return a request for comment. Google declined to comment.

“Moving too quickly has its own sort of consequences, potentially stifling and tamping down some of the benefits that can come with this technology,” said Dylan Hoffman, executive director for California and the Southwest for TechNet.

The bill passed the Assembly Privacy and Consumer Protection Committee on Tuesday and will next go to the Assembly Judiciary Committee and Assembly Appropriations Committee, and if it passes, to the Assembly floor.

Proponents of Wiener’s bill say they’re responding to the public’s wishes. In a poll of 800 potential voters in California commissioned by the Center for AI Safety Action Fund, 86% of participants said it was an important priority for the state to develop AI safety regulations. According to the poll, 77% of participants supported the proposal to subject AI systems to safety testing.

“The status quo right now is that, when it comes to safety and security, we’re relying on voluntary public commitments made by these companies,” said Hilton, the former OpenAI employee. “But part of the problem is that there isn’t a good accountability mechanism.”

Another bill with sweeping implications for workplaces is AB 2930, which seeks to prevent “algorithmic discrimination,” or when automated systems put certain people at a disadvantage based on their race, gender or sexual orientation when it comes to hiring, pay and termination.

“We see example after example in the AI space where outputs are biased,” said Assemblymember Rebecca Bauer-Kahan, D-Orinda.

The anti-discrimination bill failed in last year’s legislative session, with major opposition from tech companies. Reintroduced this year, the measure initially had backing from high-profile tech companies Workday and Microsoft, although they have wavered in their support, expressing concerns over amendments that would put more responsibility on firms developing AI products to curb bias.

“Usually, you don’t have industries saying, ‘Regulate me,’ but various communities don’t trust AI, and what this effort is trying to do is build trust in these AI systems, which I think is really beneficial for industry,” Bauer-Kahan said.

Some labor and data privacy advocates worry that language in the proposed anti-discrimination legislation is too weak. Opponents say it’s too broad.

Chandler Morse, head of public policy at Workday, said the company supports AB 2930 as introduced. “We are currently evaluating our position on the new amendments,” Morse said.

Microsoft declined to comment.

The threat of AI is also a rallying cry for Hollywood unions. The Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists negotiated AI protections for their members during last year’s strikes, but the risks of the tech go beyond the scope of union contracts, said actors guild National Executive Director Duncan Crabtree-Ireland.

“We need public policy to catch up and to start putting these norms in place so that there is less of a Wild West kind of environment going on with AI,” Crabtree-Ireland said.

SAG-AFTRA has helped draft three federal bills related to deepfakes (misleading images and videos often involving celebrity likenesses), along with two measures in California, including AB 2602, which would strengthen workers’ control over the use of their digital likenesses. The legislation, if approved, would require that workers be represented by their union or legal counsel for agreements involving AI-generated likenesses to be legally binding.

Tech companies urge caution against overregulation. Todd O’Boyle, of the tech industry group Chamber of Progress, said California AI companies may opt to move elsewhere if government oversight becomes overbearing. It’s important for legislators to “not let fears of speculative harms drive policymaking when we’ve got this transformative, technological innovation that stands to create so much prosperity in its earliest days,” he said.

When regulations are put in place, it’s hard to roll them back, warned Aaron Levie, chief executive of the Redwood City, California-based cloud computing company Box, which is incorporating AI into its products.

“We need to actually have more powerful models that do even more and are more capable,” Levie said, “and then let’s start to assess the risk incrementally from there.”

But Crabtree-Ireland said tech companies are trying to slow-roll regulation by making the issues seem more complicated than they are and by saying they need to be solved in one comprehensive public policy proposal.

“We reject that completely,” Crabtree-Ireland said. “We don’t think everything about AI has to be solved all at once.”

© 2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation:
California lawmakers are trying to regulate AI before it’s too late (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-california-lawmakers-ai-late.html
