5 Comments
Nov 21, 2023 · edited Nov 21, 2023 · Liked by Sharon Goldman

Seems like AI is suffering from a human-alignment problem 👀

Nov 23, 2023 · edited Nov 23, 2023

Relevant, from Henry Farrell's Substack: https://www.programmablemutter.com/p/look-at-scientology-to-understand (on the connection to EA as well)


Very interesting, thank you for sharing! I'm curious: could you write more about which AI risks and threats participants identified, on what timeframe, and with what impact on businesses and industries? So far I have only heard speculation or implausible claims about this. Many thanks for considering my request.

author

One thing I will be writing about is the risk of LLM model weights being stolen or leaked by China or other nation-state actors; that is a big concern. One CISO told me it keeps them up at night, because access to the weights could let an adversary build an equally powerful model from the same recipe.


Absolutely, that's concerning. I'm eagerly awaiting your next pieces on this topic! Also, have there been any significant concerns about risks or safety threats to businesses and their customers or users? Have you learned anything substantial on this?
