Marc Andreessen's techno-optimism: The other side of the AI doomer coin
As the AI prophets on both sides pontificate, the pragmatists are left unheard
TG-AI-F! Here, a few extra thoughts on the AI week that was. Subscribe to get these (hopefully sparkling) missives directly in your inbox, and also just to stay in touch (let me know what you think!). ~ Sharon (P.S. For my regular AI news and trend takes, find me on VentureBeat)
When I first read Marc Andreessen's new blog post this week, I chuckled. I laughed. I slapped my knee. What a hoot!
I hear many saying "Huh?" The 5,000-word blog post, written by the co-founder of VC firm Andreessen Horowitz, wasn't meant to be funny, right? Oh, come on: just check out the hilariously cocky, arrogant, self-serving, navel-gazing, prophetic tone of the piece, and then imagine Andreessen dressed for Halloween as a biblical prophet carrying a scroll, or a couple of stone tablets.
"We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever" - ha! 😂
"We believe intelligence is the ultimate engine of progress. Intelligence makes everything better." - oh, lordy! 😂
"We believe we should place intelligence and energy in a positive feedback loop, and drive them both to infinity." - lol! Infinity, you say? 😂
"We believe the ultimate mission of technology is to advance life both on Earth and in the stars." - omfg! 😂
"We believe technology is liberatory. Liberatory of human potential. Liberatory of the human soul, the human spirit. Expanding what it can mean to be free, to be fulfilled, to be alive." - Oh yes, please! Do liberate my soul! I can't wait! 😂
Of course, the reason the post is so funny to me - and also, to be honest, more than a little frightening - is that it's so ridiculously extreme. It's why I giggled at OpenAI's February manifesto, Planning for AGI and Beyond, with its grand message about how AGI could "empower humanity to maximally flourish in the universe" - as I said back then, how about just figuring out how to keep Bing's Sydney from having a major meltdown?
The AI doomer narrative is just as extreme - basically, the other side of the same coin. I've been reporting on this "thin line" between AI doom and hype for many months now, including a piece in May where I spoke to several experts about the current "doomer" narrative focused on existential future risk from runaway artificial general intelligence (AGI). Prominent examples include March's AI pause letter and May's Statement on AI Risk, signed by hundreds of experts including the CEOs of OpenAI, DeepMind and Anthropic, which warned of a "risk of extinction" from advanced AI if its development is not properly managed.
It's gotten to the point where even the doomers don't want to be called doomers. Just the other day, Wired reported on Yoshua Bengio's latest comments on AI's existential risks: "Some might say, 'Oh, Yoshua is trying to scare.' But I'm a positive person. I'm not a doomer like people may call me. There's a problem, and I'm thinking about solutions. I want to discuss them with others who may have something to bring. The research on improving AI's capabilities is racing ahead because there's now a lot - a lot - more money invested in this. It means mitigating the largest risks is urgent."
To me, the biggest problem with the pontificating of the AI prophets on both sides is that the AI pragmatists - the less funny, less quotable, less verbose majority - are left unheard. By its very nature, pragmatism is not as "sexy" as extremism. The middle ground, said Louise Nevelson, is "the most boring place in the world."
But when it comes to the rapidly developing world of AI, maybe the middle ground is the only solid place to land at the moment. I'd like to raise up the voices of the pragmatists - they're not as knee-slappingly funny, but they make a whole lot of sense to me.
As NYU professor and AI researcher Kyunghyun Cho told me back in June, "I'm disappointed by a lot of this discussion about existential risk; now they even call it literal 'extinction.' It's sucking the air out of the room."
And Sara Hooker, head of the nonprofit Cohere for AI and former research scientist at Google Brain, told me back then: "It's almost a topsy-turvy world - in the public discourse, [x-risk] is being treated as if it's the dominant view of this technology." She said her main concern is that it "minimizes a lot of conversations around present day risk and in terms of allocation of resources." In addition, she said, "I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. Because for me, that's what a lot of researchers work on day in, day out, and it displaces visibility and resources for the efforts of many researchers who work on safety."
When it comes to Andreessen's manifesto, Gary Marcus's post about it on X (formerly known as Twitter) the other day jumped out at me:
I admire his unbridled optimism, his yearning for markets to be free, his longing for technologies that could be without restraint, his ability to unabashedly cite 56 of his allies and nobody who disagrees, and, Nixon-like, to confidently declare anyone who disagrees with him to be both immoral and an Enemy, and above all else his absolute certainty in his own ideas. But not his lack of respect for data, for alternative perspectives, or for people less wealthy or dogmatic than himself. The very idea that the world might focus on sustainable development goals for the sake of the less fortunate puts him in absolute agony. His statement of beliefs (which includes an assertion that he believes in the scientific method) is declaration, not science, nor reasoned, steel-manned argument.
And I loved the pushback from the folks at TechCrunch, who called out the biases that Andreessen brings to the table: "mainly that he is absurdly wealthy (worth an estimated $1.35 billion as of September 2022) and that his absurd wealth is largely tied to the investments of his namesake tech venture fund. So, he inherently is going to push for his techno-optimist vision, because the success of tech companies means he gets even more rich."
I will repeat what I said back in June: The future of AI is unknown. That's the problem with AI prophets and their outsized role today in influencing AI policy. I'd like to hear more from the AI pragmatists; I think they will be more helpful in both the short and the long term. But thanks anyway, Marc - I had a good laugh.