Betting Big on the Future: The $30 Billion Quest for Safe Superintelligence


Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new venture, Safe Superintelligence, whose ambitious goal and hefty funding have raised eyebrows given that the company has no product. Valued at $30 billion, Safe Superintelligence aims to deliver a “safe superintelligence” as its first and only product, a concept that remains speculative and distant in the field of artificial intelligence (AI). Despite widespread skepticism that artificial general intelligence (AGI), meaning systems that surpass human cognition, will arrive anytime soon, the company has attracted $1 billion in investment from major firms including Andreessen Horowitz and Sequoia Capital. Its approach of pursuing superintelligence directly, without the immediate pressure of releasing intermediate products, is unconventional and highly speculative, and it reflects broader debate and excitement in the AI community over whether and when AGI can be reached.
Read more at Futurism…