On May 13, Business Insider reported that Daniel Kokotajlo, a former OpenAI researcher and current head of the AI Futures Project, said that the AI industry is racing to build systems that each company does not yet fully understand or control.

Caption: Daniel Kokotajlo
In the interview with Business Insider, Kokotajlo explained that the core issue facing AI companies is the "alignment" problem: the effort to ensure that future AI systems reliably follow human instructions and values, even once their capabilities surpass humans' in many fields.
He said that researchers do not yet fully understand how advanced AI models make decisions internally, and this uncertainty makes it difficult to ensure that future systems are truly "aligned" and will reliably pursue the goals humans want them to accomplish.
"It's sort of an open secret, but we don't have a real viable solution yet," he said of implementing AI alignment.
Kokotajlo worked at OpenAI from 2022 to 2024 on forecasting research, studying how quickly AI systems might advance and what economic, political, and security risks might arise as companies build more powerful models.
Today, through his nonprofit research organization, the AI Futures Project, he focuses on similar questions: how quickly AI systems might develop, and what risks will arise if companies continue to prioritize speed and competition.
"Once superintelligence is created, humans will no longer be the leader of this planet, at least not the default leader," Kokotajlo said.
Kokotajlo said many people still underestimate the speed of AI development because discussions of it often sound like science fiction. His warning comes as AI companies continue to invest billions of dollars in more powerful models and larger data centers.