
Beyond Nvidia: the search for AI’s next breakthrough



Vinod Khosla, founder of Khosla Ventures, speaks on a panel on the main stage during the 2024 Collision tech conference in Toronto, Canada.

For a few days, AI chip juggernaut Nvidia sat on the throne as the world’s biggest company, but behind its staggering success are questions on whether new entrants can stake a claim to the artificial intelligence bonanza.

Nvidia, whose graphics processors dominate the training of generative AI’s large language models, is now Big Tech’s newest member, and its stock market takeoff has lifted the whole sector.

Even tech’s second rung on Wall Street has ridden Nvidia’s coattails, with Oracle, Broadcom, HP and a spate of others seeing their stock valuations surge despite sometimes shaky earnings.

Amid the champagne popping, startups seeking the attention of Silicon Valley venture capitalists are being asked to innovate — but without a clear indication of where the next chapter of AI will be written.

When it comes to generative AI, doubts persist on what exactly will be left for companies that are not existing model makers, a field dominated by Microsoft-backed OpenAI, Google and Anthropic.

Most agree that competing with them head-on could be a fool’s errand.

“I don’t think that there’s a great opportunity to start a foundational AI company at this point in time,” said Mike Myer, founder and CEO of tech firm Quiq, at the Collision technology conference in Toronto.

Some have tried to build applications that use or mimic the powers of the existing big models, but this is being slapped down by Silicon Valley’s biggest players.

“What I find disturbing is that people are not differentiating between those applications which are roadkill for the models as they progress in their capabilities, and those that are really adding value and will be here 10 years from now,” said venture capital veteran Vinod Khosla.

‘Won’t keep up’

The tough-talking Khosla is one of OpenAI’s earliest investors.

“Grammarly won’t keep up,” Khosla predicted of the spelling and grammar checking app, and others similar to it.

He said these companies, which put only a “thin wrapper” around what the AI models can offer, are doomed.

One of the fields ripe for the taking is chip design, Khosla said, with AI demanding ever more specialized processors that provide highly specific powers.

“If you look across the chip history, we really have for the most part focused on more general chips,” Rebecca Parsons, CTO at tech consultancy Thoughtworks, told AFP.

Providing more specialized processing for the many demands of AI is an opportunity seized by Groq, a hot startup that has built chips for running trained AI models, a task known as inference, rather than for the training itself, which is the specialty of Nvidia’s world-dominating GPUs.

Groq CEO Jonathan Ross told AFP that Nvidia won’t be the best at everything, even if it is uncontested in generative AI training.

“Nvidia and (its CEO) Jensen Huang are like Michael Jordan… the greatest of all time in basketball. But inference is baseball, and we try and forget the time where Michael Jordan tried to play baseball and wasn’t very good at it,” he said.

Another opportunity will come from highly specialized AI that will provide expertise and know-how based on proprietary data which won’t be co-opted by voracious big tech.

“OpenAI and Google aren’t going to build a structural engineer. They’re not going to build products like a primary care doctor or a mental health therapist,” said Khosla.

Profiting from highly specialized data is the basis of Cohere, another of Silicon Valley’s hottest startups, which pitches purpose-built models to businesses that are skittish about AI veering out of their control.

“Enterprises are skeptical of technology, and they’re risk-averse, and so we need to win their trust and to prove to them that there’s a way to adopt this technology that’s reliable, trustworthy and secure,” Cohere CEO Aidan Gomez told AFP.

When he was just 20 and working at Google, Gomez co-authored the seminal paper “Attention Is All You Need,” which introduced the Transformer, the architecture behind popular large language models like OpenAI’s GPT-4.

The company has received funding from Nvidia and Salesforce Ventures and is valued in the billions of dollars.

© 2024 AFP

Citation:
Beyond Nvidia: the search for AI’s next breakthrough (2024, June 23)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-nvidia-ai-breakthrough.html


New 3D printing technique integrates electronics into microchannels to create flexible, stretchable microfluidic devices



Flexible and stretchable microfluidic devices using direct printing of silicone-based, 3D microchannel networks
The injection of liquid metal into 3D-printed microchannels formed electrical connections between 3D conductive networks and the embedded electronic elements, enabling the fabrication of flexible and stretchable microfluidic electronics such as skin-attachable NFC tags and wireless light-emitting devices. Credit: SUTD

The transition from traditional 2D to 3D microfluidic structures is a significant advancement in microfluidics, offering benefits in scientific and industrial applications. These 3D systems improve throughput through parallel operation, and soft elastomeric networks, when filled with conductive materials like liquid metal, allow for the integration of microfluidics and electronics.

However, traditional methods such as soft lithography, which requires cleanroom facilities, have limitations in achieving fully automated 3D interconnected microchannels. The manual procedures involved, including polydimethylsiloxane (PDMS) molding and layer-to-layer alignment, hinder the automation of microfluidic device production.

3D printing is a promising alternative to traditional microfluidic fabrication methods. Photopolymerization techniques like stereolithography apparatus (SLA) and digital light processing (DLP) enable the creation of complex microchannels.

While photopolymerization allows for flexible devices, challenges remain in integrating external components such as electronic elements during light-based printing.

Extrusion-based methods like fused deposition modeling (FDM) and direct ink writing (DIW) offer automated fabrication but face difficulties in printing elastomeric hollow structures. The key challenge is finding an ink that balances softness for component embedding and robustness for structural integrity to achieve fully printed, interconnected microfluidic devices with embedded functionality.

As of now, existing 3D printing technologies have not simultaneously realized 1) direct printing of interconnected multilayered microchannels without supporting materials or post-processing and 2) integration of electronic elements during the printing process.

Researchers from the Singapore University of Technology and Design’s (SUTD) Soft Fluidics Lab addressed these two significant challenges in a study appearing in Advanced Functional Materials:

1. Direct printing of interconnected multilayered microchannels

The settings for DIW 3D printing were optimized to create support-free hollow structures for silicone sealant, ensuring that the extruded structure did not collapse. The research team further expanded this demonstration to fabricate interconnected multilayered microchannels with through-holes between layers; such geometries of microchannels (and electric wires) are often required for electronic devices such as antennas for wireless communication.

2. Integration of electronic components

Another challenge is the integration of electronic components into the microchannels during the 3D printing process. This is difficult to achieve with resins that cure immediately.

The research team took advantage of gradually curing resins to embed and immobilize small electronic elements such as RFID tags and LED chips. The self-alignment of those elements with the microchannels allowed the components to self-assemble with the electrical wiring when liquid metal was perfused through the channel.

Why is this technology important?

While many electronic devices necessitate a 3D configuration of conductive wires, such as a jumper wire in a coil, this is challenging to achieve through conventional 3D printing methods.

The SUTD research team proposed a straightforward solution for realizing devices with such intricate configurations. By injecting liquid metal into a 3D multilayered microchannel containing embedded electronic components, self-assembly of conductive wires with these components is facilitated, enabling the streamlined fabrication of flexible and stretchable liquid metal coils.

To exemplify the practical advantages of this technology, the team created a skin-attachable radio-frequency identification (RFID) tag using a commercially available skin-adhesive plaster as a substrate and a free-standing flexible wireless light-emitting device with a compact footprint (21.4 mm × 15 mm).

The first demonstration underscores the solution’s ability to automate the production of stretchable printed circuits on a widely accepted, medically approved platform. The fabricated RFID tag maintained a high Q factor (~70) even after 1,000 cycles of tensile stress at 50% strain, showing stability under repeated deformation while adhering to the skin. For the second, the research team envisions small, flexible wireless optoelectronics serving as medical implants for photodynamic therapy on biological surfaces and lumens.

“Our technology will offer a new capability to realize the automated fabrication of stretchable printed circuits with 3D configuration of electrical circuits consisting of liquid metals,” says lead author of the paper, Dr. Kento Yamagishi, SUTD.

“The DIW 3D printing of elastomeric multilayered microchannels will enable the automated fabrication of fluidic devices with 3D arrangement of channels, including multifunctional sensors, multi-material mixers, and 3D tissue engineering scaffolds,” says Associate Professor Michinao Hashimoto, SUTD principal investigator.

More information:
Kento Yamagishi et al, Flexible and Stretchable Liquid‐Metal Microfluidic Electronics Using Directly Printed 3D Microchannel Networks, Advanced Functional Materials (2023). DOI: 10.1002/adfm.202311219

Citation:
New 3D printing technique integrates electronics into microchannels to create flexible, stretchable microfluidic devices (2024, June 12)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-3d-technique-electronics-microchannels-flexible.html


Research into ‘hallucinating’ generative models advances reliability of artificial intelligence



Overview of semantic entropy and confabulation detection. Credit: Nature (2024). DOI: 10.1038/s41586-024-07421-0

Researchers from the University of Oxford have made a significant advance toward ensuring that information produced by generative artificial intelligence (AI) is robust and reliable.

In a new study published in Nature, they demonstrate a novel method to detect when a large language model (LLM) is likely to “hallucinate” (i.e., invent facts that sound plausible but are imaginary).

This advance could open up new ways to deploy LLMs in situations where “careless errors” are costly, such as legal or medical question-answering.

The researchers focused on a type of hallucination known as confabulation, in which an LLM gives different answers each time it is asked a question, even if the wording is identical.

“LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up,” said study author Dr. Sebastian Farquhar, from the University of Oxford’s Department of Computer Science.

“With previous approaches, it wasn’t possible to tell the difference between a model being uncertain about what to say versus being uncertain about how to say it. But our new method overcomes this.”

To do this, the research team developed a method grounded in statistics, estimating uncertainty from the amount of variation (measured as entropy) between multiple outputs.

Their approach computes uncertainty at the level of meaning rather than sequences of words, i.e., it spots when LLMs are uncertain about the actual meaning of an answer, not just the phrasing. To do this, the probabilities produced by the LLMs, which state how likely each word is to be next in a sentence, are translated into probabilities over meanings.

The new method proved much better at spotting when a question was likely to be answered incorrectly than all previous methods, when tested across six LLMs (including GPT-4 and LLaMA 2).

This was the case for a wide range of different datasets including answering questions drawn from Google searches, technical biomedical questions, and mathematical word problems. The researchers even demonstrated how semantic entropy can identify specific claims in short biographies generated by ChatGPT that are likely to be incorrect.

“Our method basically estimates probabilities in meaning-space, or ‘semantic probabilities,'” said study co-author Jannik Kossen (Department of Computer Science, University of Oxford). “The appeal of this approach is that it uses the LLMs themselves to do this conversion.”
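
As a rough illustration of the approach described above, the sketch below samples several answers to the same question, groups those judged to share a meaning, and computes the entropy over the resulting clusters. The sample_answers and means_the_same functions are hypothetical stand-ins (the equivalence check could, for instance, use an entailment model); this is a sketch of the idea, not the authors' released code.

```python
import math

def semantic_entropy(question, sample_answers, means_the_same, n_samples=10):
    """Rough sketch of semantic entropy: uncertainty over meanings, not wordings.

    sample_answers(question, n) -> list of n generated answers (hypothetical LLM call).
    means_the_same(a, b) -> True if two answers express the same meaning,
    e.g. as judged by an entailment model (hypothetical helper).
    """
    answers = sample_answers(question, n_samples)

    # Group answers into clusters that share a meaning.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if means_the_same(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Entropy over the meaning clusters: many conflicting, roughly equally
    # likely meanings give high entropy, which flags a likely confabulation.
    probs = [len(c) / len(answers) for c in clusters]
    return -sum(p * math.log(p) for p in probs)
```

A system built on an LLM could then flag, or decline to answer, any question whose estimated semantic entropy exceeds a chosen threshold.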

By detecting when a prompt is likely to produce a confabulation, the new method can help make users of generative AI aware when the answers to a question are probably unreliable, and to allow systems built on LLMs to avoid answering questions likely to cause confabulations.

A key advantage of the technique is that it works across datasets and tasks without a priori knowledge, requiring no task-specific data, and generalizes robustly to tasks not seen before. Although it can make the process several times more computationally costly than using a generative model directly, this is clearly justified when accuracy is paramount.

Currently, hallucinations are a critical factor holding back wider adoption of LLMs like ChatGPT or Gemini. Besides making LLMs unreliable, for example by presenting inaccuracies in news articles and fabricating legal precedents, they can even be dangerous, for example when used in medical diagnosis.

The study’s senior author Yarin Gal, Professor of Computer Science at the University of Oxford and Director of Research at the UK’s AI Safety Institute, said, “Getting answers from LLMs is cheap, but reliability is the biggest bottleneck. In situations where reliability matters, computing semantic uncertainty is a small price to pay.”

Professor Gal’s research group, the Oxford Applied and Theoretical Machine Learning group, is home to this and other work pushing the frontiers of robust and reliable generative models, expertise he now brings to his role at the UK’s AI Safety Institute.

The researchers highlight that confabulation is just one type of error that LLMs can make. “Semantic uncertainty helps with specific reliability problems, but this is only part of the story,” explained Dr. Farquhar.

“If an LLM makes consistent mistakes, this new method won’t catch that. The most dangerous failures of AI come when a system does something bad but is confident and systematic. There is still a lot of work to do.”

More information:
Sebastian Farquhar et al, Detecting hallucinations in large language models using semantic entropy, Nature (2024). DOI: 10.1038/s41586-024-07421-0

Karin Verspoor, ‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations, Nature (2024). DOI: 10.1038/d41586-024-01641-0

Citation:
Research into ‘hallucinating’ generative models advances reliability of artificial intelligence (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-hallucinating-generative-advances-reliability-artificial.html


Keeping your data from Apple is harder than expected, finds study



The contrast between the steps users experience and the data handling processes involved at various stages of the device setup process. The user begins by purchasing a new device. Steps 1–18 explain the steps required for a complete setup of a user’s device, for instance a MacBook (macOS 10.15+). Yellow bubbles denoted by letters A–H summarize Apple’s official privacy policy statement [3] and highlight examples of personal information collection at various stages of the setup process, as well as other data handling details such as where information is stored (in F, for example, users’ fingerprints are stored locally on the device). There may be slight variations in the order in which these settings are presented in iOS and macOS; additionally, Siri (step 15) is not prompted during device setup on iPhone (iOS 14.0). The order of the diagram follows the presentation of the settings on macOS. Credit: Privacy of Default Apps in Apple’s Mobile Ecosystem (2024)

“Privacy. That’s iPhone,” the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how third-party apps that users choose to install erode their privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps, the ones that are pretty much unavoidable on a new device, be it a computer, tablet, or mobile phone.

The researchers will present their findings in mid-May at the CHI conference, and the peer-reviewed research paper is already available online.

“We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,” says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers.

“Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,” says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

“The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings—or even both,” says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

“It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,” Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. “When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,” Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data?

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalized user experiences, among other things.

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects, as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple.

So what can be learned from all this—are users ultimately facing an almost impossible task?

“Unfortunately, that’s one lesson,” says Lindqvist.

More information:
Paper: Privacy of Default Apps in Apple’s Mobile Ecosystem

Provided by
Aalto University


Citation:
Keeping your data from Apple is harder than expected, finds study (2024, April 3)
retrieved 24 June 2024
from https://techxplore.com/news/2024-04-apple-harder.html


Dutch app supermarket boss eyes tech boom in online delivery



Online deliveries will see a ‘massive’ boost from AI, says Picnic boss Michiel Muller.

Advances in artificial intelligence are poised to drive a “massive” boom in online grocery deliveries, according to the head of Picnic, a Dutch app-only supermarket rapidly expanding into Germany and France.

Picnic has disrupted the Dutch supermarket landscape with its offer of free delivery in a time window of 20 minutes—made possible by squeezing efficiency out of huge amounts of data.

The firm already uses AI for a vast range of operations, explained CEO Michiel Muller, 59, at the firm’s 43,000-square-meter distribution hub in Utrecht, central Netherlands.

“For instance, predicting how many bananas we will sell in three weeks’ time. Or what happens when the weather is good or bad. Or doing our whole route planning,” he told AFP.

As technology improves and datasets grow, predictions will become more accurate, further reducing food waste and offering even more precise time slots for customers, he forecast.
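
As a purely illustrative sketch of the kind of forecast Muller describes, the snippet below projects weekly sales of a single product a few weeks ahead from recent sales history and a simple weather adjustment. The data, weighting scheme and weather factor are assumptions made up for the example, not Picnic’s actual models.

```python
from statistics import mean

def forecast_demand(weekly_sales, weather_factor=1.0, horizon_weeks=3):
    """Very rough demand forecast for one product (illustrative only).

    weekly_sales: recent weekly unit sales, oldest first (made-up data).
    weather_factor: >1.0 if good weather is expected to lift sales, <1.0 otherwise.
    horizon_weeks: how many weeks ahead to forecast.
    """
    # Baseline: average of recent weeks, weighted toward the most recent ones.
    weights = range(1, len(weekly_sales) + 1)
    baseline = sum(w * s for w, s in zip(weights, weekly_sales)) / sum(weights)

    # Crude weekly trend: recent half of the history versus the older half.
    half = len(weekly_sales) // 2
    trend = (mean(weekly_sales[half:]) - mean(weekly_sales[:half])) / max(half, 1)

    return (baseline + trend * horizon_weeks) * weather_factor

# Example: a three-week-ahead banana forecast under good weather.
recent_sales = [1200, 1260, 1180, 1310, 1290, 1350]
print(round(forecast_demand(recent_sales, weather_factor=1.05)))
```

In practice, forecasts like this would feed purchasing and the route planning Muller mentions, drawing on far richer data than this toy example.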

“Don’t forget that supermarkets weren’t there 60 years ago. You only had smaller stores. So there’s always a movement around new technology and new ways of delivering goods.”

“The supermarket will remain. That’s for sure. Stores will remain. But the online part will grow massively,” he said.

Picnic has developed its own in-house software to fine-tune every element of the delivery process, from processing and packing stock at the warehouse to the famously complex “last mile” of dropping off the goods.

The warehouse has 14 kilometers of conveyor belts.

Delivery times are calculated with extraordinary precision, with reams of information crunched by 300 data analysts and 300 software engineers at Picnic’s headquarters.

“We know exactly how long it takes to walk around the vehicle and when it’s dark outside, we add six seconds to the delivery time,” said Muller.
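
As a toy sketch of that kind of second-level bookkeeping, the snippet below builds a per-stop time estimate from a few components and applies the six-second darkness adjustment quoted above; every other number and parameter name is invented for illustration.

```python
def estimate_stop_seconds(walk_around_vehicle_s, carry_to_door_s, handover_s, is_dark):
    """Estimate the time for a single delivery stop, in seconds (illustrative only)."""
    total = walk_around_vehicle_s + carry_to_door_s + handover_s
    if is_dark:
        total += 6  # the six-second adjustment Picnic applies after dark
    return total

# Example: an evening stop with assumed component times.
print(estimate_stop_seconds(walk_around_vehicle_s=15, carry_to_door_s=40,
                            handover_s=30, is_dark=True))
```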

Unlike at a physical supermarket, every order comes through the app, so the firm knows exactly what it needs to order and deliver, and how long that should take.

The firm estimates this results in seven times less food waste than at regular supermarkets.

“There’s not a single baguette that is ordered and not delivered,” said Gregoire Borgoltz, head of Picnic’s operations in France.

The firm’s drivers in the ubiquitous white Picnic vans receive a rating after every trip based on their driving, even assessing whether they have sped too fast around corners.

Each delivery is meticulously tracked.

‘Level of automation’

The huge investments required in bespoke software, plus the firm’s distribution hubs with 14 kilometers (nine miles) of conveyor belts, means profits have been hard to come by.

Sales have risen from 10 million euros in 2016 to 1.25 billion in 2023, with staff levels soaring from 100 employees to 17,000 over the same period.

But Muller said the firm suffered losses of “around 200 million euros” last year due to its expansion in Germany, where it opened delivery slots in Berlin, Hamburg and Hannover.

For the first time since its 2015 founding, it finally turned a gross profit this year in its home market. “It took eight years to be profitable in the Netherlands,” he said.

Earlier this year, the firm raised 355 million euros from investors, notably the Bill & Melinda Gates Foundation and German retail giant Edeka, to fund its push into Germany and France.

When it comes to profits, it’s again all down to technology, said Muller.

The company is expanding to Germany and France.

“Basically, the level of automation determines our level of profitability,” he said.

“Today, we have about 30 percent automated in Holland. We will grow to 100 percent in a couple of years’ time,” with Germany and France following soon behind.

So far, Picnic is mainly operating in the northern French city of Lille and the greater Paris suburbs. Central Paris is a “big opportunity but also has some of the worst traffic jams”, said Borgoltz.

“We will go to Paris but we have to find the right moment.”

Muller has ambitions to spread the company further. “Well, there are 183 countries in the world,” he joked when asked where Picnic would expand next.

But for the moment, he said the firm would consolidate its activities in Germany and France before looking further afield—not ruling out a push outside Europe.

© 2024 AFP

Citation:
Dutch app supermarket boss eyes tech boom in online delivery (2024, June 23)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-dutch-app-supermarket-boss-eyes.html
