
AI firms warned to calculate threat of superintelligence or risk it escaping human control

AI safety campaigner calls for existential threat assessment akin to Oppenheimer's calculations before first nuclear test

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945 after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity.

In a paper, Tegmark and three of his students at the Massachusetts Institute of Technology (MIT) recommend calculating the "Compton constant", defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be "slightly less" than one in three million.
