Marc Andreessen's techno-optimism: The other side of the AI doomer coin
As the AI prophets on both sides pontificate, the pragmatists are left unheard
TG-AI-F! Here, a few extra thoughts on the AI week that was. Subscribe to get these (hopefully sparkling) missives directly in your inbox, and also just to stay in touch (let me know what you think!). 😊 ~ Sharon (P.S. For my regular AI news and trend takes, find me on VentureBeat)
When I first read Marc Andreessen’s new blog post this week, I chuckled. I laughed. I slapped my knee. What a hoot!
I can hear many of you saying “Huh?” The 5,000-word blog post, written by the co-founder of VC firm Andreessen Horowitz, wasn’t meant to be funny, right? Oh, come on — just check out the hilariously cocky, arrogant, self-serving, navel-gazing, prophetic tone of the piece, and then imagine Andreessen dressed for Halloween as a biblical prophet carrying a scroll, or a couple of stone tablets.
“We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever” — ha! 😂
“We believe intelligence is the ultimate engine of progress. Intelligence makes everything better.” — oh, lordy! 😂
“We believe we should place intelligence and energy in a positive feedback loop, and drive them both to infinity.” — lol! Infinity, you say? 😂
“We believe the ultimate mission of technology is to advance life both on Earth and in the stars.” — omfg! 😂
“We believe technology is liberatory. Liberatory of human potential. Liberatory of the human soul, the human spirit. Expanding what it can mean to be free, to be fulfilled, to be alive.” — Oh yes, please! Do liberate my soul! I can’t wait! 😂
Of course, the reason the post is so funny to me — and also, to be honest, more than a little frightening — is that it’s so ridiculously extreme. It’s why I giggled at OpenAI’s February manifesto, Planning for AGI and Beyond, with its grand message about how AGI could “empower humanity to maximally flourish in the universe” — as I said back then, how about just figuring out how to keep Bing’s Sydney from having a major meltdown?
The same goes for the AI doomer narrative, which is, basically, the other side of the coin. I’ve been reporting on this “thin line” between AI doom and hype for many months now — including a piece in May where I spoke to several experts about the current ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). Examples of that narrative include March’s AI pause letter and May’s Statement on AI Risk, signed by hundreds of experts, including the CEOs of OpenAI, DeepMind and Anthropic, which warned of a “risk of extinction” from advanced AI if its development is not properly managed.
It’s gotten to the point where even the doomers don’t want to be called doomers: Just the other day, Wired reported on Yoshua Bengio’s latest comments on AI’s existential risks: “Some might say, ‘Oh, Yoshua is trying to scare.’ But I'm a positive person. I'm not a doomer like people may call me. There's a problem, and I'm thinking about solutions. I want to discuss them with others who may have something to bring. The research on improving AI's capabilities is racing ahead because there's now a lot—a lot—more money invested in this. It means mitigating the largest risks is urgent.”
To me, the biggest problem with the pontificating of the AI prophets on both sides is that the AI pragmatists — the less funny, less quotable, less verbose majority — are left unheard. By its very nature, pragmatism is not as “sexy” as extremism. The middle ground, said Louise Nevelson, is “the most boring place in the world.”
But when it comes to the rapidly developing world of AI, maybe the middle ground is the only solid place to land at the moment. I’d like to amplify the voices of the pragmatists — they’re not as knee-slapping funny, but they make a whole lot of sense to me.
As NYU professor and AI researcher Kyunghyun Cho said to me back in June, “I’m disappointed by a lot of this discussion about existential risk; now they even call it literal ‘extinction.’ It’s sucking the air out of the room.”
And Sara Hooker, head of the nonprofit Cohere for AI and former research scientist at Google Brain, told me back then that “It’s almost a topsy-turvy world — in the public discourse, [x-risk] is being treated as if it’s the dominant view of this technology.” She said her main concern is that it “minimizes a lot of conversations around present day risk and in terms of allocation of resources.” In addition, she said, “I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. Because for me, that’s what a lot of researchers work on day in, day out and it displaces visibility and resources for the efforts of many researchers who work on safety.”
When it comes to Andreessen’s manifesto, Gary Marcus’s post about it on X (formerly known as Twitter) the other day jumped out at me:
I admire his unbridled optimism, his yearning for markets to be free, his longing for technologies that could be without restraint, his ability to unabashedly cite 56 of his allies and nobody who disagrees, and, Nixon-like, to confidently declare anyone who disagrees with him to be both immoral and an Enemy, and above all else his absolute certainty in his own ideas. But not his lack of respect for data, for alternative perspectives, or for people less wealthy or dogmatic than himself. The very idea that the world might focus on sustainable development goals for the sake of the less fortunate puts him in absolute agony. His statement of beliefs (which includes an assertion that he believes in the scientific method) is declaration, not science, nor reasoned, steel-manned argument.
And I loved the pushback from the folks at TechCrunch, who called out the biases that Andreessen brings to the table — “mainly that he is absurdly wealthy (worth an estimated $1.35 billion as of September 2022) and that his absurd wealth is largely tied to the investments of his namesake tech venture fund. So, he inherently is going to push for his techno-optimist vision, because the success of tech companies means he gets even more rich.”
I will repeat what I said back in June: The future of AI is unknown. That’s the problem with AI prophets and their outsized role today in influencing AI policy. I’d like to hear more from the AI pragmatists — I think they will be more helpful in both the short and the long term. But thanks anyway, Marc — I had a good laugh.