Recently, I was in a debate about this question organized by the USTP,
“Is artificial general intelligence likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?”
That question goes to the heart of my position on AGI and existential risk.
Continue reading “The Case for the Offspring of Humanity”