Addressing the risks of artificial intelligence requires collaborative research focused on safety and ethical development.
Experts from safety-critical industries should contribute their knowledge and experience to help ensure AI systems are safe.
Regulators must have the authority to recall unsafe AI models before they cause harm.
Proactive risk assessment and pre-market controls are crucial for mitigating the potential dangers of AI as the technology develops.