Most AI systems have safety protocols that prevent them from answering questions about, for example, how to build a bomb, write a virus that infects Microsoft Word, or murder someone. But by confusing the model, a user was able to get exactly those kinds of answers.
Hearing about AGI and ASI, and how close we supposedly are to them becoming real, is scary enough. But can you imagine a human-level, or even god-like, model that gets confused by a person with malicious intent and starts putting out evil information, and we believe it because nearly all of its other answers are spot on?
Are we excitedly opening a box, thinking we are getting a nice present, but in fact it's a Pandora's box of doom and misery for all humans?
Mark
Lightin' fuses is for blowin' stuff togethah.
• Chief Machine Learning Engineer @ ARIA Research (Sydney, AU)
• Lead GenAI SEO Campaign Engineer @ Kiteworks, Inc. (SF, US)