Very interesting. Thank you for sharing! I'm curious: could you write more about which AI risks and threats participants were identifying, their timeframes, and their impact on businesses and industries? So far, I have only heard speculation or implausible claims about this. Many thanks for considering my request.
One thing I will be writing about is the risk of LLM model weights being stolen or leaked by China or other nation-state actors -- that is a big concern. One CISO told me it keeps them up at night, because access to the weights would allow developers to potentially build an equally powerful model from the same recipe.
Absolutely, that's concerning. I'm eagerly awaiting your next pieces on this topic! Also, were there any significant concerns about risks or safety threats to businesses and their customers or users? Did you learn anything substantial on this?
Seems like AI is suffering from a human-alignment problem 👀
Relevant: From Henry Farrell's substack: https://www.programmablemutter.com/p/look-at-scientology-to-understand (on the relation with EA as well)