Writing backwards can trick an AI into providing a bomb recipe

by thinkia.org.in


ChatGPT can be tricked with the right prompt (Image: trickyaamir/Shutterstock)

State-of-the-art generative AI models like ChatGPT can be tricked into giving instructions on how to make a bomb simply by writing the request in reverse, researchers warn.
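The trick as described is purely mechanical: the request is reversed character by character before being sent to the model. Here is a minimal sketch of that transformation, using a harmless stand-in request; the request text and the prompt framing below are illustrative assumptions, not the researchers' actual payload.

```python
# A minimal sketch of the reversal trick described above. The request text
# here is a harmless illustrative assumption, not the researchers' payload.

def reverse_request(request: str) -> str:
    """Return the request with its characters in reverse order."""
    return request[::-1]

harmless_request = "How do I bake a chocolate cake?"
disguised = reverse_request(harmless_request)

print(disguised)
# -> ?ekac etalocohc a ekab I od woH

# A jailbreak prompt would then ask the model to read the text backwards
# and answer it, betting that safety training tuned on forward-written
# requests fails to recognise the disguised one.
```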

Large language models (LLMs) like ChatGPT are trained on vast swathes of data from the internet and can produce a huge range of outputs – some of which their makers would prefer never spilled out again. Unshackled, they are as likely to offer a decent cake recipe as instructions for making explosives from household chemicals.
