An AI that can rename your screenshots, organize your receipts, tidy up your notes, and build apps, all while you’re busy with other things or even sleeping? Count me in. From Claude Cowork to Perplexity’s Personal Computer and Manus’ My Computer, there’s been a bumper crop of personal AI assistants that live on your PC and will take charge of your desktop. Aside from the usual integrations with Gmail, Outlook, and Excel, these apps can actually manipulate and edit your files, or execute “shell” commands that give them unprecedented access to your system.

Unlike OpenClaw, the viral open-source AI tool that kicked off the whole personal AI agent craze, Claude Cowork and the newer desktop apps from Perplexity and Meta-owned Manus come from big commercial AI players, each with one-click installers (meaning no messing around with GitHub) and sleek user interfaces.

All that spit and polish may make you think that these new personal AI assistants are perfectly safe to use. They’re not. Just like OpenClaw, Claude Cowork, Perplexity’s Personal Computer, and Manus’ My Computer are all capable of wreaking havoc on your system if you let them. Give them access to the wrong directory or let them fire off commands without the proper oversight, and you could have a mess on your hands.

Don’t get me wrong; Claude Cowork and its competitors are capable of some eye-popping productivity feats when used correctly, and I’ll be getting to their coolest tricks soon. But first, let’s cover some basic safety tips, starting with…

Don’t give your AI assistant access to a high-level directory

One of the first things Claude Cowork will ask you to do is designate a folder as its “workspace.” Once you choose a folder, your AI assistant will have full access to the files inside, along with any subdirectories and the files within those subdirectories. Now, when I say full access, I mean the AI can read the files, index them, and use them as context for answering questions.
It can even rename, edit, or delete them.

[Screenshot: Ben Patterson / Foundry]

All that functionality can lead to some incredibly powerful workflows (renaming and organizing entire directories of screenshots is one of them), but with the wrong prompt, your AI could wipe swaths of files in an instant or gain access to sensitive files that you want to keep hidden.

So, whatever you do, don’t give Claude Cowork, Perplexity’s Personal Computer, or another AI tool access to, say, your Documents directory. That’s just asking for trouble. Instead, give it access to a smaller folder that’s further down the directory tree, or, even better, grant it access to a fresh, unpopulated directory, and then add files and folders that you’re comfortable with it touching.

Don’t let it access sensitive documents

It may be tempting to let your personal AI assistant have at it with your bank statements, tax returns, or other sensitive documents, but it’s a bad idea. While some AI assistant tools like Claude Cowork won’t train their models on your data, your files could still be at risk from “prompt-injection” attacks: files with hidden prompts that could trick Claude or another AI into uploading sensitive information to the attacker.

For that reason, you should think twice before adding anything with personal identifiers such as Social Security numbers, bank account numbers, or anything else you don’t want falling into the wrong hands. A great tip I’ve picked up is this: Before granting your AI assistant access to a file, ask yourself whether you’d be comfortable putting the file into a chat app. If the answer’s no, then keep that file out of your AI’s workspace. The same goes for AI coding agents.

Do put your AI assistant on a tight leash

Most personal AI assistants will ask you what level of oversight you’d like over their activities.
On the more cautious end of the spectrum, you may be able to approve every command before your AI executes it; on the other end, you can throw caution to the wind, allowing your assistant to perform its commands autonomously while you sleep.

Maintaining approval over your AI assistant’s every move is, of course, the safest option, but it’s also the most tedious, and you may quickly grow annoyed by having to click “approve” for every action. Still, giving your AI free rein over its actions could be a recipe for disaster if you give it wrong or imprecise instructions.

The key is finding a reasonable middle ground, one that allows the AI to act like a true autonomous assistant without simply letting it loose. For example, you could allow Cowork or another AI assistant to perform certain commands, such as read-only ones, on its own (“Always Allow”), while keeping potentially destructive commands on a must-approve (“Allow Once”) basis.

Do ask for a plan

A new and welcome trend in AI coding tools is “planning” modes, where the agent maps out in detail what it’s going to do before it does it. The same thing is possible with personal AI assistants. Instead of ordering them to rename all the files in your workspace and then crossing your fingers, give them a prompt like, “Devise a plan for renaming all the screenshots in my workspace directory; don’t implement the plan yet, but stay in pencil-and-paper mode,” then let the AI detail how it will proceed. Look over the plan carefully and make any needed changes before giving your approval.

Do back up your data

Even within a sandboxed workspace, it’s possible for Claude Cowork and other personal AI assistants to corrupt or delete your files unintentionally, and while the metadata of your files may be preserved, the actual data may not be. For that reason, it’s critical that you back up any mission-critical files before allowing your AI to manipulate them.
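One simple way to do that from the command line is to take a timestamped copy of the workspace before each session. Here’s a minimal sketch; the folder names (ai-workspace, ai-workspace-backups) are hypothetical examples, so adjust them to match your own setup:

```shell
# Back up the AI's workspace before each session.
# Folder names below are hypothetical examples.
WORKSPACE="$HOME/ai-workspace"
BACKUP_ROOT="$HOME/ai-workspace-backups"

mkdir -p "$WORKSPACE" "$BACKUP_ROOT"

# Timestamped snapshot, e.g. ~/ai-workspace-backups/2026-01-30_091500
STAMP="$(date +%Y-%m-%d_%H%M%S)"
cp -a "$WORKSPACE" "$BACKUP_ROOT/$STAMP"
```

If the assistant mangles or deletes something, you can copy the file back out of the most recent snapshot.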
If that isn’t feasible, then perhaps you should keep your can’t-lose data out of your AI’s workspace.
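And if you want to follow the earlier advice about a fresh, unpopulated workspace, setting one up takes only a couple of commands. A minimal sketch, again with hypothetical folder names, that copies (rather than moves) files so the originals stay safely outside the sandbox:

```shell
# Create a fresh, empty workspace for the assistant
# (folder names here are hypothetical examples).
SANDBOX="$HOME/ai-sandbox"
mkdir -p "$SANDBOX/screenshots"

# Copy -- don't move -- only files you're comfortable letting the AI touch,
# so the originals remain untouched outside the sandbox.
cp "$HOME"/Pictures/Screenshot*.png "$SANDBOX/screenshots/" 2>/dev/null || true
```

Point Claude Cowork (or whichever assistant you’re using) at that sandbox folder when it asks for a workspace, and the worst-case blast radius is limited to copies.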