Vmedvil5 Posted November 11, 2022

This is a video about Roko's Basilisk, which I found interesting.

"Roko's basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" (named after the legendary reptile that can cause death with a single glance) because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent. A basilisk in this context is any information that harms or endangers the people who hear it."