Google is launching an AI-powered chatbot to rival OpenAI's ChatGPT, the latest in a series of AI advances that researchers are increasingly concerned about. The safety of AI systems becomes harder to predict as models grow more capable and are deployed at greater scale, with experts warning of risks such as misinformation, bias, and the potential for catastrophe if the technology is misused.
Concerns range from sophisticated lies spread on social media to terrorist organizations hacking into government websites or producing biochemical agents with AI's help. One worry is that self-improving models could enter a feedback loop, becoming ever smarter and more efficient, with unpredictable consequences.
Safety techniques such as reinforcement learning from human feedback (RLHF) and red-teaming are being used to make AI models safer, but there is no guarantee they will always work. Experts propose setting up a standards body to audit the safety of models built by AI labs, and investing more in AI-safety research to better understand and mitigate the risks. Despite these risks, AI also holds tremendous promise in areas such as drug development, personalized education, and climate-change technology.
Source: The Economist