The Meta/Scale AI news was big—but Microsoft’s Copilot security flaw was my story of the week
I was still sniffling and sneezing from my bout with Covid (that I caught at the AI+ Expo in DC last week) when the news broke that Meta CEO Mark Zuckerberg had decided to create an ambitious new “superintelligence” AI research lab headed by Scale AI CEO Alexandr Wang.
It was a surprise move that kept me busy all week, with one take that the move was a bold bid for relevance in its fierce AI battle with OpenAI, Anthropic, and Google — but it is also far from a slam dunk. In addition, I tackled Zuckerberg’s battle for the best AI talent to staff the new superintelligence team. Deedy Das, a VC at Menlo Ventures, told me that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor,” he said (a number that one AI researcher told me was “not outrageous at all” and “is likely low in certain sub-areas like LLM pre-training,” though most of the compensation would be in the form of equity).
But I also had to work on an exclusive story I had been offered for Tuesday, about the fact that Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, had harbored a critical security flaw for several months that, according to researchers, signals a broader risk of AI agents being hacked.
Strangely, it was this story that got the most attention this week — vastly surpassing the Meta story. Certainly the headline was spicy: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified.’
However, the fact that it was shared by a variety of tech and security blogs shows that this is a real concern for the future of AI agents. The flaw, revealed by AI security startup Aim Security and shared exclusively in advance with Fortune, is the first known “zero-click” attack on an AI agent — at least according to Aim’s claim — that is, an AI that acts autonomously to achieve specific goals. The nature of the vulnerability means that the user doesn’t need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent.
In the case of Microsoft 365 Copilot, the vulnerability — which, luckily, never happened in the wild — lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself.
The researchers at Aim Security dubbed the flaw “EchoLeak.” Microsoft told Fortune that it has already fixed the issue in Microsoft 365 Copilot and that its customers were unaffected.
I was told that the biggest concern is that EchoLeak could apply to other kinds of agents—from Anthropic’s MCP (Model Context Protocol), which connects AI assistants to other applications, to platforms like Salesforce’s Agentforce.
This is one comment from someone I really respect — Rich Harang, a principal security architect at NVIDIA:
And while he didn’t mention my piece,
“The implications of this vulnerability are significant. In an environment where AI tools are increasingly integrated into daily tasks, the potential for exploitation presents a threat not only to individual users but also to organizations at large. Although Microsoft has assured customers that they have implemented necessary patches, the nature of this attack raises questions about the security measures that AI platforms must adopt. With attackers bypassing existing security mechanisms, including cross-prompt injection protections, the need for robust safeguards remains crucial in mitigating risks associated with AI applications.”
Then there was also this comment from what I think is a Decentralized Autonomous Organization that I guess is Blockchain? I don’t really understand it, but ICPanda DAO seems to say the EchoLeak vulnerability shows why we need decentralized, on-chain AI agents — not ones controlled by big centralized platforms like Microsoft:
I’ll need to have a think on that one. 🫡
Meanwhile, I’m fully recovered from Covid, and enjoying some R&R this weekend down on the Jersey Shore in Ocean Grove. I can’t help digging into a little Meta/Scale scoop I’m trying to pin down, but haven’t fully confirmed yet. It’s obviously time to get offline. 👋
I hope you got to Day's during your R&R!
Both interesting stories. Sharing my articles on both of them
From Fair Use to Data Moats: The Shifting Value of Data in AI
https://open.substack.com/pub/pramodhmallipatna/p/from-fair-use-to-data-moats-the-shifting
Securing Autonomous Agents in the Wild
https://open.substack.com/pub/pramodhmallipatna/p/securing-autonomous-ai-agents-in