
Research into ‘hallucinating’ generative models advances reliability of artificial intelligence

Overview of semantic entropy and confabulation detection. Credit: Nature (2024). DOI: 10.1038/s41586-024-07421-0

Researchers from the University of Oxford have made a significant advance toward ensuring that information produced by generative artificial intelligence (AI) is robust and reliable.

In a new study published in Nature, they demonstrate a novel method to detect when a large language model (LLM) is likely to “hallucinate” (i.e., invent facts that sound plausible but are imaginary).

This advance could open up new ways to deploy LLMs in situations where “careless errors” are costly, such as legal or medical question-answering.

The researchers focused on hallucinations where an LLM gives different answers each time it is asked a question—even when the wording is identical—a failure mode known as confabulation.

“LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up,” said study author Dr. Sebastian Farquhar, from the University of Oxford’s Department of Computer Science.

“With previous approaches, it wasn’t possible to tell the difference between a model being uncertain about what to say versus being uncertain about how to say it. But our new method overcomes this.”

To do this, the research team developed a method grounded in statistics, which estimates uncertainty from the amount of variation (measured as entropy) between multiple outputs.

Their approach computes uncertainty at the level of meaning rather than sequences of words, i.e., it spots when LLMs are uncertain about the actual meaning of an answer, not just the phrasing. To do this, the probabilities produced by the LLMs, which state how likely each word is to be next in a sentence, are translated into probabilities over meanings.
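
To make the idea concrete, here is a minimal sketch in Python, not the study's implementation: it assumes each sampled answer arrives with a sequence log-probability from the model, and that a helper function same_meaning decides whether two answers say the same thing (one possible implementation of that helper is sketched further below).

```python
# A minimal sketch of semantic entropy, not the study's implementation.
# Assumes: samples is a list of (answer_text, log_prob) pairs generated from
# the same prompt, and same_meaning(a, b) returns True when two answers
# express the same thing.
import math

def semantic_entropy(samples, same_meaning):
    clusters = []  # each cluster collects different phrasings of one meaning
    for answer, log_prob in samples:
        for cluster in clusters:
            if same_meaning(cluster[0][0], answer):
                cluster.append((answer, log_prob))
                break
        else:
            clusters.append([(answer, log_prob)])
    # A meaning's probability is the summed probability of all its phrasings.
    # (A real implementation would work in log space for numerical stability.)
    masses = [sum(math.exp(lp) for _, lp in cluster) for cluster in clusters]
    total = sum(masses)
    probs = [m / total for m in masses]
    return -sum(p * math.log(p) for p in probs if p > 0)
```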

When tested against six LLMs (including GPT-4 and LLaMA 2), the new method proved much better than all previous methods at spotting when a question was likely to be answered incorrectly.

This was the case for a wide range of different datasets including answering questions drawn from Google searches, technical biomedical questions, and mathematical word problems. The researchers even demonstrated how semantic entropy can identify specific claims in short biographies generated by ChatGPT that are likely to be incorrect.

“Our method basically estimates probabilities in meaning-space, or ‘semantic probabilities,’” said study co-author Jannik Kossen (Department of Computer Science, University of Oxford). “The appeal of this approach is that it uses the LLMs themselves to do this conversion.”
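
One plausible way to implement that meaning test is bidirectional entailment: two answers count as the same meaning if each entails the other. The sketch below uses an off-the-shelf natural language inference model as the judge; the study's exact setup may differ.

```python
# A sketch of the meaning test via bidirectional entailment. The NLI model
# named here is an assumption, not necessarily the judge used in the study.
from transformers import pipeline

nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    result = nli([{"text": premise, "text_pair": hypothesis}])[0]
    return result["label"] == "ENTAILMENT"

def same_meaning(a: str, b: str) -> bool:
    # Two answers share a meaning only if each entails the other.
    return entails(a, b) and entails(b, a)
```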

By detecting when a prompt is likely to produce a confabulation, the new method can warn users of generative AI that a model's answer is probably unreliable, and can let systems built on LLMs decline to answer questions likely to cause confabulations.
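
In a deployed system, this could become a simple gate on the entropy score. The sketch below builds on the earlier snippet; the threshold value is an assumption that a real deployment would tune on held-out data.

```python
# Hypothetical gating on semantic entropy; the 1.0 threshold is an assumed
# value that a real deployment would tune on held-out data.
def answer_or_abstain(samples, same_meaning, threshold=1.0):
    if semantic_entropy(samples, same_meaning) > threshold:
        return "I'm not confident enough to answer that reliably."
    # Otherwise return the phrasing the model assigned the most mass to.
    return max(samples, key=lambda s: s[1])[0]
```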

A key advantage of the technique is that it works across datasets and tasks without a priori knowledge, requires no task-specific data, and generalizes robustly to tasks not seen before. Although it can make the process several times more computationally costly than using a generative model directly, this is clearly justified when accuracy is paramount.

Currently, hallucinations are a critical factor holding back wider adoption of LLMs like ChatGPT or Gemini. Besides making LLMs unreliable, for example by presenting inaccuracies in news articles and fabricating legal precedents, they can even be dangerous, for example when used in medical diagnosis.

The study’s senior author Yarin Gal, Professor of Computer Science at the University of Oxford and Director of Research at the UK’s AI Safety Institute, said, “Getting answers from LLMs is cheap, but reliability is the biggest bottleneck. In situations where reliability matters, computing semantic uncertainty is a small price to pay.”

Professor Gal’s research group, the Oxford Applied and Theoretical Machine Learning Group, is home to this and other work pushing the frontiers of robust and reliable generative models.

The researchers highlight that confabulation is just one type of error that LLMs can make. “Semantic uncertainty helps with specific reliability problems, but this is only part of the story,” explained Dr. Farquhar.

“If an LLM makes consistent mistakes, this new method won’t catch that. The most dangerous failures of AI come when a system does something bad but is confident and systematic. There is still a lot of work to do.”

More information:
Sebastian Farquhar et al, Detecting hallucinations in large language models using semantic entropy, Nature (2024). DOI: 10.1038/s41586-024-07421-0

Karin Verspoor, ‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations, Nature (2024). DOI: 10.1038/d41586-024-01641-0

Citation:
Research into ‘hallucinating’ generative models advances reliability of artificial intelligence (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-hallucinating-generative-advances-reliability-artificial.html


Keeping your data from Apple is harder than expected, finds study

The contrast between the steps users experience and the data handling processes involved at various stages of the device setup process. The user begins by purchasing a new device. Steps 1–18 explain the steps required for a complete setup of a user’s device, for instance, a MacBook (macOS 10.15+). Yellow bubbles denoted by letters A–H summarize Apple’s official privacy policy statement [3], highlighting examples of personal information collection at various stages of the setup process, as well as other data handling procedures such as where information is stored (e.g., in F, users’ fingerprints are stored locally on the device). There may be slight variations between the order in which these settings are presented in iOS and macOS; additionally, Siri (step 15) is not prompted during device setup on iPhone (iOS 14.0). The diagram follows the order of presentation of the settings on macOS. Credit: Privacy of Default Apps in Apple’s Mobile Ecosystem (2024)

“Privacy. That’s iPhone,” the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how voluntarily installed third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps, the ones that are pretty much unavoidable on a new device, be it a computer, tablet, or mobile phone.

The researchers will present their findings in mid-May at the CHI conference, and the peer-reviewed research paper is already available online.

“We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,” says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers.

“Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,” says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

“The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings—or even both,” says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

“It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,” Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. “When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,” Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data?

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalized user experiences, among other things.

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects, as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple.

So what can be learned from all this—are users ultimately facing an almost impossible task?

“Unfortunately, that’s one lesson,” says Lindqvist.

More information:
Paper: Privacy of Default Apps in Apple’s Mobile Ecosystem

Provided by
Aalto University


Citation:
Keeping your data from Apple is harder than expected, finds study (2024, April 3)
retrieved 24 June 2024
from https://techxplore.com/news/2024-04-apple-harder.html


Dutch app supermarket boss eyes tech boom in online delivery

Online deliveries will see a ‘massive’ boost from AI, says Picnic boss Michiel Muller.

Advances in artificial intelligence are poised to drive a “massive” boom in online grocery deliveries, according to the head of Picnic, a Dutch app-only supermarket rapidly expanding into Germany and France.

Picnic has disrupted the Dutch supermarket landscape with its offer of free delivery in a time window of 20 minutes—made possible by squeezing efficiency out of huge amounts of data.

The firm already uses AI for a vast range of operations, explained CEO Michiel Muller, 59, at the firm’s 43,000-square-meter distribution hub in Utrecht, central Netherlands.

“For instance, predicting how many bananas we will sell in three weeks’ time. Or what happens when the weather is good or bad. Or doing our whole route planning,” he told AFP.

As technology improves and datasets grow, predictions will become more accurate, further reducing food waste and offering even more precise time slots for customers, he forecast.

“Don’t forget that supermarkets weren’t there 60 years ago. You only had smaller stores. So there’s always a movement around new technology and new ways of delivering goods.”

“The supermarket will remain. That’s for sure. Stores will remain. But the online part will grow massively,” he said.

Picnic has developed its own in-house software to fine-tune every element of the delivery process, from processing and packing stock at the warehouse to the famously complex “last mile” of dropping off the goods.

The warehouse has 14 kilometers of conveyor belts.

Delivery times are calculated with extraordinary precision, with reams of information crunched by 300 data analysts and 300 software engineers at Picnic’s headquarters.

“We know exactly how long it takes to walk around the vehicle and when it’s dark outside, we add six seconds to the delivery time,” said Muller.

Unlike a physical supermarket, every order comes through on the app, so the firm knows exactly what it needs to order and deliver, and how long that should take.

The firm estimates this results in seven times less food waste than at regular supermarkets.

“There’s not a single baguette that is ordered and not delivered,” said Gregoire Borgoltz, head of Picnic’s operations in France.

The firm’s drivers in the ubiquitous white Picnic vans receive a rating after every trip based on their driving, even assessing whether they have sped too fast around corners.

Each delivery is meticulously tracked.

‘Level of automation’

The huge investments required in bespoke software, plus the firm’s distribution hubs with 14 kilometers (nine miles) of conveyor belts, mean profits have been hard to come by.

Sales have risen from 10 million euros in 2016 to 1.25 billion in 2023, with staff levels soaring from 100 employees to 17,000 over the same period.

But Muller said the firm suffered losses of “around 200 million euros” last year due to expanding in Germany—opening slots in Berlin, Hamburg and Hannover.

For the first time since its 2015 founding, it finally turned in a gross profit this year in its home market. “It took eight years to be profitable in the Netherlands,” he said.

Earlier this year, the firm raised 355 million euros from investors, notably the Bill & Melinda Gates Foundation and German retail giant Edeka, to fund its push into Germany and France.

When it comes to profits, it’s again all down to technology, said Muller.

The company is expanding to Germany and France.

“Basically, the level of automation determines our level of profitability,” he said.

“Today, we have about 30 percent automated in Holland. We will grow to 100 percent in a couple of years’ time,” with Germany and France following soon behind.

So far, Picnic is mainly operating in the northern French city of Lille and the greater Paris suburbs. Central Paris is a “big opportunity but also has some of the worst traffic jams”, said Borgoltz.

“We will go to Paris but we have to find the right moment.”

Muller has ambitions to spread the company further. “Well, there are 183 countries in the world,” he jokes when asked where Picnic will expand to next.

But for the moment, he said the firm would consolidate its activities in Germany and France before looking further afield—not ruling out a push outside Europe.

© 2024 AFP

Citation:
Dutch app supermarket boss eyes tech boom in online delivery (2024, June 23)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-dutch-app-supermarket-boss-eyes.html


Could a cockpit warning system prevent close calls between planes at US airports?

Honeywell test pilots Joe Duval, left, and Clint Coatney fly a Boeing 757 test aircraft demonstrating runway hazard warning systems over the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

As a Delta Air Lines jet began roaring down a runway, an air traffic controller at New York’s John F. Kennedy International Airport suddenly blurted out an expletive, then ordered the pilots to stop their takeoff roll.

The controller saw an American Airlines plane mistakenly crossing the same runway, into the path of the accelerating Delta jet. JFK is one of only 35 U.S. airports with the equipment to track planes and vehicles on the ground. The system alerted the airport control tower to the danger, possibly saving lives last year.

The National Transportation Safety Board and many independent experts say pilots should get warnings without waiting precious seconds to get word from controllers. Just last week, the NTSB recommended that the Federal Aviation Administration collaborate with manufacturers to develop technology for alerting pilots directly.

Honeywell International, a conglomerate with a big aerospace business, has been working on such an early-warning system for about 15 years and thinks it is close to a finished product. The company gave a demonstration during a test flight last week. As pilot Joe Duval aimed a Boeing 757 for a runway in Tyler, Texas, a warning appeared on his display and sounded in the cockpit: “Traffic on runway!”


The system had detected a business jet that was just appearing as a speck on the runway about a mile away—ground the Boeing would cover in a matter of seconds.

Duval tilted the plane’s nose up and pushed the throttle forward into a G-force-inducing climb, safely away from the Dassault Falcon 900 below.

Honeywell officials claim that in the January 2023 near-miss at JFK, their technology would have alerted the Delta pilots 13 seconds before the air traffic controller screamed the expletive and told them to stop their takeoff. Merely removing the need for a controller to relay warnings from ground-based systems could be critical.
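
The arithmetic behind those seconds is simple enough to sketch. The back-of-the-envelope Python snippet below uses assumed, illustrative speeds; it is not Honeywell's algorithm.

```python
# Back-of-the-envelope timing for a runway conflict; all figures are assumed
# illustrations, not Honeywell's algorithm.
def seconds_to_conflict(distance_nm: float, ground_speed_kt: float) -> float:
    """Time for an aircraft to cover distance_nm nautical miles."""
    return distance_nm / ground_speed_kt * 3600.0

# Traffic spotted a mile ahead of a jet on a typical ~140-knot approach:
print(f"{seconds_to_conflict(1.0, 140.0):.0f} s to conflict")  # about 26 s
```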

“Those are microseconds, but they are enough to make a difference,” Michael McCormick, a former FAA official who now teaches air-traffic management at Embry-Riddle Aeronautical University in Florida, said. “Providing alerts directly to the cockpit is the next step. This puts the tool in the hands of the pilot who actually has control of the aircraft. This technology is a game-changer.”

A jet sits on a runway creating a hazard seen from a Boeing 757 test aircraft on landing approach to demonstrate runway hazard warning systems over the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

Honeywell plans to layer the cockpit-alert system on top of technology that is already in wide use and warns pilots if they fly too low.

Incidents like the one at JFK are called runway incursions—a plane or ground vehicle is on a runway when it shouldn’t be. Some incursions are caused by pilots entering a runway without clearance from air traffic controllers. In other cases, there isn’t enough spacing between planes that are landing or taking off, which can be the fault of pilots or controllers.

The number of incursions fell during the coronavirus pandemic and has not returned to the recent peaks of more than 2,000 incidents recorded in both 2016 and 2017. However, the most serious ones—where a collision was narrowly avoided or there was a “significant potential” for a crash—have been rising since 2017. There were 23 in the United States last year, up from 16 in 2022, according to FAA statistics.

Reducing incursions has always been a priority for FAA “because that’s where the greatest risk lies in the aviation system,” said McCormick, the former FAA official.

Honeywell test pilot Joe Duval, left, pulls a Boeing 757 test aircraft out of a landing approach demonstrating runway hazard warning systems over the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

The worst accident in aviation history occurred in 1977 on the Spanish island of Tenerife, when a KLM 747 began its takeoff roll while a Pan Am 747 was still on the runway; 583 people died when the planes collided in thick fog.

Earlier this year, a Japan Airlines jet landing in Tokyo collided with a Japanese coast guard plane that was preparing to take off. Five crew members on the coast guard plane died, but all 379 people on board the airliner escaped before it was destroyed by fire.

The FAA has paid for airport improvements designed to reduce incursions, such as reconfiguring confusing taxiways. It has also paid for technology to alert people in the control tower when a plane is lined up to land on a taxiway instead of a runway.

That type of landing error nearly happened in 2017 in San Francisco, when an Air Canada jet pulled up at the last second to avoid crashing into four jets on the taxiway that were carrying about 1,000 passengers between them.

Honeywell test pilot Joe Duval pulls a Boeing 757 test aircraft out of a landing approach demonstrating runway hazard warning systems at the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

The FAA is also rolling out more simulators for controllers to practice directing traffic during times of low visibility. The NTSB last week recommended that the FAA require annual refresher training. The suggestion came after the NTSB determined that a controller who nearly caused a catastrophic crash between a FedEx plane and a Southwest Airlines jet during heavy fog in Austin, Texas, last year had not trained for low-visibility conditions in at least two years.

The NTSB’s examination of the February 2023 close call in Austin also renewed attention on technology to provide cockpit warnings of possible incursions and included a brief reference to the system Honeywell is developing. The FAA has not certified the system, which Honeywell calls “Surf-A” for surface alerts, but the company thinks certification could happen in the next 18 months.

The FAA’s best technology against runway incursions is a system called ASDE-X that lets controllers track planes and vehicles on the ground. But it is expensive, so it’s only at 35 of the 520 U.S. airports with a control tower.

“Some people thought ASDE-X was the solution,” former NTSB Chairman Robert Sumwalt said. “The problem is, there are a lot more than 35 air-carrier airports. A product (that warns pilots in the cockpit) goes to every airport that the airplane goes to.”

Honeywell test pilot Joe Duval pulls a Boeing 757 test aircraft out of a landing approach demonstrating runway hazard warning systems over the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

Honeywell, which is based in Charlotte, North Carolina, began working on a cockpit warning system around 2008 and tried to convince airlines to support the idea, but it says it found no takers. The company suspended the project when the pandemic devastated aviation in 2020.

Then, as air travel recovered early last year, there were a series of high-profile close calls between planes at major U.S. airports, including the ones at JFK and Austin–Bergstrom International Airport.

“Traffic was picking up. You were having more of the near-misses,” said Thea Feyereisen, part of the Honeywell team working on the system. The timing was right to revive the warning system.

“Previously, when we would talk to airlines, they were not interested. Last year, we go talk to the airlines again, and now they’re interested,” she said.

A jet sits on a runway creating a hazard, seen from a Boeing 757 test aircraft cockpit demonstrating runway hazard warning systems, at the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

A jet crosses a runway creating a hazard, seen from a Boeing 757 test aircraft cockpit demonstrating runway hazard warning systems, at the airport in Tyler, Texas, Tuesday, June 4, 2024. Credit: AP Photo/LM Otero

Still, Honeywell doesn’t have a launch customer, and company officials won’t say how much it would cost to outfit a plane.

Feyereisen was asked if the system would have prevented the close calls in New York and Austin.

“What our lawyers tell us to say (is) we reduce the risk of a runway incursion. We provide the pilot more time to make a decision” whether to, for example, call off a landing and fly around the airport instead, she said. “Still, the pilot needs to make a decision.”

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Could a cockpit warning system prevent close calls between planes at US airports? (2024, June 13)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-airport-runways-tech-company-revives.html


New technique makes lengthy privacy notices easier to understand by converting them into machine-readable formats


An Aston University researcher has suggested a more human-friendly way of reading websites’ long-winded privacy notices.

A team led by Dr. Vitor Jesus has developed a system that makes them quicker and easier to understand by converting them into machine-readable formats. The technique could allow a browser to guide the user through the document with recommendations or highlights of key points.

Providing privacy information is one of the key requirements of the UK General Data Protection Regulation (GDPR) and the UK Data Protection Act, but trawling through privacy notices can be a tedious manual process.

In 2012, The Atlantic magazine estimated it would take 76 days per year to diligently read privacy notices.

Privacy notices let people know what is being done with their data, how it will be kept safe if it’s shared with anyone else and what will happen to it when it’s no longer needed.

However, the documents are written in non-computer, often legal, language. In the paper “Feasibility of Structured, Machine-Readable Privacy Notices,” Dr. Jesus and his team therefore explored representing privacy notices in a machine-readable format.

Dr. Jesus said, “The notices are essential to keep the public informed and data controllers accountable, however they inherit a pragmatism that was designed for different contexts such as software licenses or to meet the—perhaps not always necessary—verbose completeness of a legal contract.

“And there are further challenges concerning updates to notices, another requirement by law, and these are often communicated off-band, e.g., by email if a user account exists.”

Between August and September 2022, the team examined the privacy notices of 50 of the U.K.’s most popular websites, from global organizations such as Google.com to U.K. sites such as john-lewis.com. To be representative, the sites covered a number of areas, such as online services, news, and fashion.

The researchers manually identified the notices’ apparent structure and noted commonly-themed sections, then designed a JavaScript Object Notation (JSON) schema which allowed them to validate, annotate, and manipulate documents.
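
The article does not reproduce the team's schema, but the general idea can be illustrated with a toy example in Python: define a JSON schema for commonly themed sections and validate a notice against it. The section names below are invented for illustration and are not the paper's actual schema.

```python
# A toy machine-readable privacy notice validated against a JSON schema.
# The section names are invented for illustration; the paper's schema differs.
from jsonschema import validate

NOTICE_SCHEMA = {
    "type": "object",
    "required": ["data_controller", "data_collected", "retention"],
    "properties": {
        "data_controller": {"type": "string"},
        "data_collected": {"type": "array", "items": {"type": "string"}},
        "retention": {"type": "string"},
        "third_party_sharing": {"type": "array", "items": {"type": "string"}},
    },
}

notice = {
    "data_controller": "Example Ltd",
    "data_collected": ["email address", "IP address"],
    "retention": "12 months after account closure",
}

validate(instance=notice, schema=NOTICE_SCHEMA)  # raises ValidationError if malformed
```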

After identifying an overall potential structure, they revisited each notice to convert it into a format that was machine readable without compromising either legal compliance or the rights of individuals.

Although there has been previous work to tackle the same problem, the Aston University team focused primarily on automating the policies rather than data collection and processing.

Dr. Jesus, who is based at the University’s College of Engineering and Physical Sciences said, “Our research paper offers a novel approach to the long-standing problem of the interface of humans and online privacy notices.

“As literature and practice, and even art, for more than a decade have identified, privacy notices are nearly always ignored and “accepted” with little thought, mostly because it is not practical nor user-friendly to depend on reading a long text simply to access, for example, a news website. Nevertheless, privacy notices are a central element in our digital lives, often mandated by law, and with dire, often invisible, consequences.”

The paper was presented at the International Conference on Behavioural and Social Computing in November 2023, where it won best paper, and is now indexed in IEEE Xplore.

The team are now examining if AI can be used to further speed up the process by providing recommendations to the user, based on past preferences.

More information:
Vitor Jesus et al, Feasibility of Structured, Machine-Readable Privacy Notices, 2023 10th International Conference on Behavioural and Social Computing (BESC) (2024). DOI: 10.1109/BESC59560.2023.10386763

Provided by
Aston University


Citation:
New technique makes lengthy privacy notices easier to understand by converting them into machine-readable formats (2024, June 20)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-technique-lengthy-privacy-easier-machine.html
