I thought AI would replace Photoshop. Here’s why I still do it myself

When ChatGPT first debuted, I thought my days as a writer were numbered. There are so many things it can do, and I imagine artists have had similar pangs of fearful panic as generative AI keeps getting ever better at creating lifelike and/or stylized images. But I’m still here! Still getting paid to put the right words in the right order on the digital page.

And even within my small world of image manipulation, I still use Photoshop every single day. Generative AI is neat, but it isn’t perfect. In fact, it’s so far from perfect that I rarely use it. It’s only good for very specific needs, and even then I still have to do some manual touch-ups. I’m no Photoshop wizard, but even my rudimentary skills surpass what AI can do—most of the time.

Where generative AI images shine

As I’m not a Photoshop expert, my skills are definitely limited. There are things I just can’t do because I don’t know how to do them, and AI is nice for those bits. AI is also nice for quick little tasks where I don’t care about perfect results, like prototypes and ideations.

Generative AI is fantastic for quickly whipping up concept art and creating fun digital props for a roleplaying game. I’ve used it to create phony sci-fi tablet overlays for sending pretend messages to my tabletop RPG players, and Photoshop’s own Generative Fill feature is a quick and dirty healing brush/clone tool replacement that makes it way easier to fill in any gaps in an image or just make it a little bigger.

Creating an image of my tabletop RPG characters stylized like it’s on a terminal screen? That’s one task for which I don’t mind using AI. (Jon Martindale / Foundry)

I’ve used generative AI to create prototype card layouts for a game I’m designing, for quick personal memes between friends, and for portraits representing the characters I want to roleplay as. But for me? That’s where generative AI’s usefulness ends.

I don’t use it to create sprawling vistas or gigantic works of art. Why would I? Sure, it might be impressive from a technical standpoint that AI tools can create those things out of thin air. But I don’t really have any use for that. At their core, large language model AIs just aren’t capable of understanding anything meaningfully.

Even when I do need generative AI, the lack of accuracy, precision, verisimilitude, and ability to follow specific instructions kills its usefulness. It still mostly feels like a tech demo, and that makes it largely ineffective for anything beyond novelty.

The glaring weaknesses of generative AI

One time, I was making a character portrait for one of my players in an upcoming Alien RPG tabletop game. I wanted a sci-fi guy in a jumpsuit with corporate vibes, with his fingers splayed into “W” and “Y” shapes to represent his loyalty to Weyland-Yutani Corporation (his employer).

I struggled a lot with that one. I mean, I just wanted the character to have his fingers splayed in the right way. But could AI do it? Oh boy, could it not.

No matter how hard I tried to finagle it, the results sucked. I tried upwards of 10 different prompts to get it to understand that I wanted three fingers up on one hand and two on the other, splayed to create the impression of “W” and “Y” letters. Sometimes it made the hands face the wrong way. It never got the number of fingers right, and it never splayed them in the right way. It utterly failed.

After trying—and failing—to get the AI to generate what I wanted for close to 20 minutes, I gave up and just made it myself using the first generated image as a reference.
I cloned one of the fingers, moved it into the right spot, adjusted the lighting, blended the layers, and it came out great. All of that took five minutes.

Okay, fine. You might say that I built upon the original creation put forth by the AI. Yes, I’m glad it gave me that initial design to work with, and it was easier to edit that than create the same thing from scratch. But as a final product? It failed to do what I needed. It was no more usable than a random image I could’ve grabbed from somewhere online.

More often than not, it’s just quicker for me to make edits manually. With AI, I have to think about how to prompt it correctly and make sure it will interpret my instructions exactly as I intend, so I get what I need from it—not something else, not something that gets it right in one area but randomly changes something in another, not the right image but with an overhauled art style. With the latest crop of LLMs, doing this proves frustratingly and opaquely difficult.

And that’s my main problem with today’s generative AI: it takes so much time to fix what it creates that I might as well have done it myself in the first place. Just look at the infamous Coca-Cola Christmas ad that came out atrocious yet cost far more to create than if they’d just paid some animators and artists—and they ended up doing that anyway because the AI results sucked and needed to be punched up.

Now, if you’re an individual and you don’t know the first thing about Photoshop, then AI can be a tool to get you halfway there. But we’re not yet at the point where manual adjustments, punch-ups, and handcrafted art—even by amateurs like myself—are obsolete.

Faster, cheaper, more sustainable

Indeed, there are a million quick little things I still use Photoshop for, each of which only takes me a few seconds to do. It doesn’t make sense to use AI for these things, even if AI is capable of them: resizing images, adjusting lighting, tweaking contrast, reframing an image, converting to a different file type, changing aspect ratios, etc.

I can still resize an image faster and more accurately with Photoshop than any LLM. (Jon Martindale / Foundry)

These are all important tasks I do every single day, and there’s no way I’m going to 1) trust an unreliable AI to do what I need done correctly, or 2) waste all that GPU time, electricity, and water on something I can do faster and more effectively with existing software. (Yes, the environmental costs of generative AI are frighteningly high.)

Look, I’m not anti-AI. It’s not like I hope the technology dies, and it’s not like I can’t see how it might be useful. But it’s important to be mindful of what we’re using and how we’re using it, and I think it’s unnecessary to use generative AI for anything I can do myself, especially if I can do it better, faster, and with less frustration.

The best of both worlds?

There’s probably an end game sometime in the next decade or two where generative AI will be able to do what I do well enough… and I might end up losing my niche altogether. Some even think that’ll happen to everyone and we’ll have to contend with a post-work world. But I don’t think LLMs are what’s going to make that happen.

For now, AI tools can help. They’re useful, but they have limitations. I use AI for fun, novelty, and unimportant stuff. I’ll spin up some images and videos to send my friends, or inspiration for our next roleplaying session, or a quick concept for a creative project. It’s invaluable for placeholder art in a work-in-progress game design, for example.
But when it comes to anything critical, anything that demands specificity, anything that could land me in hot water if it contains mistakes, anything I can already do myself in no time? I’ll do it myself. No way I’m going to entrust any of that to an unreliable AI that won’t listen to instructions and will instead inject a bunch of its own hallucinations. If AI is going to make my life harder, then I’ll just do it myself.