|
| The Agent on Your Desktop You open your laptop, point your AI at a folder, and type "organize this." You go get coffee. By the time you're back, the files are sorted, renamed, and the duplicates are gone. That's Claude Cowork, and it's the most useful thing you could install this year. It's also the product I get a lot of questions about, and those questions deserve honest answers. This week's newsletter walks through Cowork end to end: what it does, the security architecture Anthropic already built in, and the user-side controls that matter most. This tool is truly impressive, yet getting the most out of it means understanding its strengths, and its weaknesses. | Claude Cowork became generally available on April 9, and is included in all paid Claude plans. Cowork gives Claude direct access to folders you choose and runs multistep tasks on your behalf: organizing files, drafting documents, analyzing data, even automating browser workflows if paired with the Claude Chrome extension. That capability set is the point. People using Cowork are speeding up file handling, first-pass drafting, and research synthesis compared to others still copy-pasting in and out of a chat window. Key Insight: Cowork is powerful but carries real risk. Use a dedicated working folder (not Documents, Desktop, or any folder with financial or personal records). Review Cowork's task details before approving any step that uses content you didn't create. (See Prompt Injection article below.) |
When you use Cowork, or other agentic AI applications, the LLM never actually touches your filesystem. Every file read, every command, and every API call passes through a separate layer of software called a harness. The harness controls what the AI does, and it also controls what the AI sees, which is why it is a critical security layer in agentic AI. So much so that the competitive advantage in agentic AI has shifted from the model to the scaffolding around the model. Why It Matters: When you evaluate AI tools, ask concrete questions: which tools the agent has by default, where execution happens, how permissions are managed, and how isolation is handled. If the answers are vague, the harness is vague. Cross that tool off your list. Read the Article → You ask a chatbot for some web research; you get a response from the AI. You ask for some clarification and the chatbot sends your first prompt + the web research + your new prompt to the AI for its next response. In this process, the AI cannot tell the difference between your prompts and the content it retrieved from the web search. This is the foundation of prompt injection. (See the .Video section below.)
It is this pattern that attackers exploit with prompt injection. Cowork's harness is built for exactly this. It scans tool outputs for injection attempts before the model sees them. It also gates file operations behind permission prompts and runs execution in a sandbox so a hijacked action has somewhere less dangerous to land. Strategic Takeaway: For AI agents, the riskiest files are the ones you point it at: PDFs someone emailed you, scraped web pages, vendor spec sheets, third-party API responses. Assume every untrusted input could carry malicious instructions and review them appropriately. See the Report → | Quick Hits.Foundations A Hands-On Walkthrough of Cowork This Cowork tutorial is a great primer. It covers the basics, then three practical examples: organizing a messy Downloads folder, converting files in batches, generating a PDF report from app data. Also covers connectors, Chrome integration, and where Cowork still falls short. | .Video How Prompt Injection Actually Works A clear explanation of prompt injection, walking through a real example where an AI agent bought the wrong book because of instructions smuggled into a web page. Explains why browser-based agents can be misled and what controls actually matter. | .Deep Dive Under the Hood of an Agentic Harness A detailed explanation of why the harness, not the model, determines whether an AI agent is enterprise-ready. Covers the translator role between raw model output and real business workflows, and why governance, security, and persistence all live there. |
|
| Industry DevelopmentsGoogle Builds a Cowork Rival Google is testing an Agent tab inside Gemini Enterprise with a UI that mirrors Cowork's structure: goal, connected apps, files, and a "Require human review" toggle. A release date hasn't been announced. Google I/O is the most likely venue. Competition in the agentic desktop category is heating up. | Anthropic Hardens Cowork for Enterprise Anthropic rolled out enterprise controls for Cowork this month: centralized admin policies, data governance, audit logs, and expanded security review on connectors and MCP servers. If your company has been waiting for IT sign-off before piloting Cowork, it's a good time to move forward. |
|
| |
|