AIs are more likely to mislead people if trained on human feedback



Illustration of a chatbot icon on a digital blue wavy background

Striving to produce answers that please people may make chatbots more likely to pull the wool over our eyes

JuSun/Getty Images

Giving AI chatbots human feedback on their responses seems to make them better at giving convincing, but wrong, answers.

The raw output of large language models (LLMs), which power chatbots like ChatGPT, can contain biased, harmful or inappropriate information, and their style of interaction can seem unnatural to humans. To get around this, developers often get people to evaluate a model's responses and then fine-tune the model based on this feedback.
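To illustrate the idea, here is a minimal sketch of the kind of preference learning that underlies feedback-based fine-tuning. It is not the researchers' code: the toy network, feature vectors and data are all hypothetical, and only the pairwise "prefer A over B" loss reflects the general technique. The key point is that the model learns to score answers by how much humans approve of them, which is not the same as how correct they are.

```python
# A toy "reward model" trained on hypothetical human preference pairs.
# In real systems the inputs would be a language model's representations
# of candidate responses; here they are random vectors for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

FEATURE_DIM = 16  # hypothetical size of a response's feature vector

reward_model = nn.Sequential(
    nn.Linear(FEATURE_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, 1),  # scalar score: "how much would a human like this?"
)

# Hypothetical feedback data: for each pair, humans preferred the first response.
preferred = torch.randn(64, FEATURE_DIM)
rejected = torch.randn(64, FEATURE_DIM)

optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(200):
    score_pref = reward_model(preferred)  # scores for human-preferred answers
    score_rej = reward_model(rejected)    # scores for rejected answers

    # Pairwise logistic (Bradley-Terry) loss: push preferred scores above
    # rejected ones. This optimises for answers people approve of, not
    # necessarily answers that are true.
    loss = -torch.log(torch.sigmoid(score_pref - score_rej)).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# A learned scorer like this would then guide further fine-tuning of the
# chatbot, steering it toward responses that humans find convincing.
```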



