I've been not-working-in-software for a year now.
I don't feel that rusty when I sit down at a keyboard, but I do still feel like I'm falling behind, because I've more or less completely ignored all the newfangled AI stuff. The fact that I'm referring to it as "newfangled AI stuff" isn't a joke; I genuinely know so little that the phrase accurately reflects my level of familiarity with what the heck is going on in the industry. Okay, I do know that the word "frontier" is so hot right now, but that's about it.
Anyways, today I figured I should at least give the AI agents a shot. I've been using the Zed editor for a couple of months, not for the advertised AI features but because I gave up on trying to learn vim, was frustrated with how slow and annoying VSCode was getting, and really missed the good old Atom days, so I was kinda already in the right place. I subscribed to Zed's Pro plan, and taking Mike's advice (thanks Mike if you ever read this!), I got to work on some relatively small, well-scoped tasks for a weird little .epub-related project I'm working on.
After about an hour of messing around... the results I'm getting feel okay. And I had fun, but I also feel a little icky.
Around half the time, I didn't need to change much, which is pretty cool. This was especially true for the couple of little tasks where I had to fix weird formatting issues in source texts that I'm converting to .epub. For example, I wanted a function to recognize the weird plain-text formatting of chapter headings, which were a strange multi-line situation instead of something more standard like markdown. The "agent" was good at predicting a regex that worked well enough for a one-time text transformation. That said, even in these successful cases, I ran into a couple of little quirks, like parts of the "chat" text somehow getting written into a file, but those are easy to fix and easy to look past.
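To give a flavour of what I mean, here's a sketch from memory (the real source format and regex were a bit different, so treat the pattern below as made up):

```ts
// Sketch of the kind of heading-matching regex the agent came up with.
// Pretend chapters look like "CHAPTER ONE", a blank line, then an
// ALL-CAPS title on its own line.
const CHAPTER_HEADING = /^CHAPTER\s+([A-Z]+)\n\n([A-Z][A-Z ',.-]+)$/gm;

function findChapterHeadings(text: string): { number: string; title: string }[] {
  return [...text.matchAll(CHAPTER_HEADING)].map((match) => ({
    number: match[1], // the spelled-out chapter number, e.g. "ONE"
    title: match[2].trim(),
  }));
}
```

Good enough for a one-shot conversion, and honestly the kind of fiddly pattern I'd rather not hand-write for a throwaway transform.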
Another thing the "agent" seemed good at was little chore-y tasks like splitting one function out of another. Again with the chapter heading example: the source text had chapter titles in ALL CAPS, and I wanted them in Title Case. I had to wrestle a bunch and ultimately eject and do the obvious thing of pulling in a changeCase module myself, but ignoring that, it felt like less effort to let the agent do the menial work of pulling apart the titleCase function I ended up building on top of changeCase.titleCase.
As I'm writing this though, I wonder whether I just need to find some good keyboard-based actions to create and rename files - as it is, that's the part of my workflow where I switch to point-and-click, which is probably why it feels slow.
Anyways... the "agent" was also great at coming up with a list of DO_NOT_CAPITALIZE words for that titleCase function. I'm no English major, so it was nice not to have to know or think about all the "articles, conjunctions, and short prepositions" in the English language, and instead just have them spat out at me. This is one example where in-editor predictions work just as well... and probably faster, too, since the "agent" makes me feel like a little boy king but takes noticeably longer, with all the file muddling and context switching between "chatting" with the agent and reviewing what the heck it's actually changed.
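For the curious, the shape of the thing ended up roughly like this (a standalone sketch that skips the changeCase dependency, with the word list abbreviated, not the exact one the agent generated):

```ts
// The agent-generated exception list, trimmed down for illustration.
const DO_NOT_CAPITALIZE = new Set([
  'a', 'an', 'the',                                // articles
  'and', 'but', 'or', 'nor',                       // conjunctions
  'at', 'by', 'for', 'in', 'of', 'on', 'to', 'up', // short prepositions
]);

function titleCase(input: string): string {
  const words = input.toLowerCase().split(/\s+/);
  return words
    .map((word, i) => {
      // First and last words always get capitalized; small words stay lower.
      const isEdge = i === 0 || i === words.length - 1;
      if (!isEdge && DO_NOT_CAPITALIZE.has(word)) return word;
      return word.charAt(0).toUpperCase() + word.slice(1);
    })
    .join(' ');
}

titleCase('THE FALL OF THE HOUSE OF USHER'); // "The Fall of the House of Usher"
```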
Another positive example has been stuff like "rewrite these TypeScript types into JSDoc annotations". For prompts that are basically translation, both the "agent" mode and inline predictions seem to work beautifully.
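For instance (with a made-up type, not one from my actual project), going from:

```ts
// The original TypeScript shape...
interface Chapter {
  title: string;
  paragraphs: string[];
}

// ...to the JSDoc equivalent, usable from a plain .js file:
/**
 * @typedef {Object} Chapter
 * @property {string} title - The chapter title, already title-cased.
 * @property {string[]} paragraphs - Body text, one entry per paragraph.
 */
```

There's basically one right answer, which seems to be exactly the situation these tools are happiest in.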
Then there's the bad half of the time.
In these cases, the code the agent writes is technically correct, in that it runs, but it doesn't actually solve the problem that well.
As a concrete negative example, I tried to prompt for a small remark plugin that'd convert "dumb quotes" to "smart quotes"... and the prediction, which took a minute or two to actually come around, was a regex-based approach. I'd prompted for a red-green test approach, which mostly worked, except the "agent" kind of "sneakily" modified some of the edge-case tests to make them pass, instead of addressing the underlying shortcomings of using a regex.
Maybe this is more a Zed problem than anything, but the "agent" also had a lot of trouble writing Unicode characters into the file using the edit-file "tool" or whatever. It kept converting the "smart quotes" back to dumb quotes as it wrote them into the file, which is pretty funny and ironic... so the "agent" ended up writing and running a throwaway Python script whose sole purpose was to write the file contents with the actual Unicode quote characters. This added a bunch of time and burned a bunch of tokens, because it took a while to figure out the workaround. And it felt a little wild to have a clear limitation, intentional or not, get steamrolled by a very hacky and strange workaround - kind of like the grandma exploit, but thankfully a lot more innocent.
On top of all this, the approach it took fell pretty short of the nuance actually needed for smart quotes... it didn't really try to tell apart apostrophes and single quotes, even though I'd mentioned that in the prompt I wrote. Again, regex doesn't seem like the best approach here, but it also seems like once we start down a path, it's pretty hard to back out without blowing everything up first. After a pretty quick web search (faster than waiting for the prompt response, at least), I found https://github.com/devonnuri/remark-smartquote, which takes a more robust approach and felt much more helpful to me.
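To make the apostrophe problem concrete, here's roughly the shape of the regex approach (a reconstruction from memory, not the agent's exact output, using unist-util-visit the way remark plugins usually do):

```ts
import { visit } from 'unist-util-visit';
import type { Root } from 'mdast';

// Naive regex-based smart quotes. The \u escapes are deliberate, so no
// tool downstream can "helpfully" dumb them back down.
export default function remarkSmartQuotes() {
  return (tree: Root) => {
    visit(tree, 'text', (node) => {
      node.value = node.value
        .replace(/"([^"]*)"/g, '\u201C$1\u201D') // "..." becomes curly pairs
        .replace(/'/g, '\u2019'); // every ' becomes a right quote, right or wrong
    });
  };
}
```

Turning every straight single quote into a right quote happens to be correct for apostrophes, but it mangles any opening single quote, which is exactly the nuance a more careful approach handles.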
At the end of the day... it took maybe 45 minutes for me to burn through $5 worth of tokens. That seems kind of expensive for what the experience and output actually were, especially if I were going to use the tool full-time. It was kind of fun at first - I felt kind of powerful ordering around some other perceived entity to write code for me. It felt great when it worked. But it still mostly feels like "better autocomplete". The agent's code that feels most acceptable to me is the stuff I'm least familiar with... meanwhile, for anything I'm super familiar with, or have any degree of confidence in how I'd write myself, I tend to be pretty unimpressed, and immediately want to change it or take a different approach.
I think there's something else, though, that bothers me on a visceral level. I have some sense (maybe incorrect?) that these fancy AI tokens are being sold to me at a pretty "good" price, maybe even at a loss, and it's the underlying algorithm that just feels inappropriate to use, at least in its current state, for the inconsequential use cases in the inconsequential side projects I'm working on. It kind of feels like asking ChatGPT to do math questions instead of using a calculator, or driving two blocks to the gym instead of walking, or ordering delivery on some worker-abusing gig app instead of walking down the street to pick up from the restaurant yourself. In each of these cases I can see why a particular way of doing something seems more convenient or fast, especially if you're part of the upper middle class and can spare the cash, but the wastefulness and the lack of consideration for the consequences spidering outwards from the decision feel pretty icky to me. Even if they're negligible in the grand scheme of things, or out of sight, or "not my problem", it feels worth considering the negative impact of using the wrong tools for the right job - much less the wrong tools for the wrong job. I've become pretty disillusioned with the tech industry as a whole, but that's too much of a rant to get into here...
On top of the massive financial, energy, and water consumption of the AI industry as a whole, which is probably more important than anything I've written so far but it's depressing so I'm not going to dive in, I feel like there's a whole other can of worms, which is the labour context in which I'm using an AI tool to help me code. Technological progress isn't a neutral thing, and it seems like a lot of AI hype is directed at engineering managers and C-suite folks, who are itching, at the behest of their stakeholders, for some way to reduce the expenses and political pressure coming from those pesky high-paid engineers, who from the outside just seem to be rewriting the same kind of code over and over again. A new tool that might make some engineers doing some kinds of work "more productive" could be a simple and good thing... but when your boss uses it as an excuse to fire half your co-workers and assure you you'll have no problem working twice as hard because the company's giving you an unlimited AI coding subscription, mostly what's really happening is a wealth transfer from upper-middle-class software engineers to the wealthy and ultra-wealthy shareholders and VCs on the boards of a couple of tech companies. If there was ever a good time to form a tech union, it was probably a couple of years ago.
Enough complaining though... there's a lot that I liked about the experience that I want to take away. I liked having to write out what I want as if I'm prompting a narrow-minded computer to predict it - this seems like a great thing to do even if I don't actually run the prompt. It helps solidify what I want, and helps me figure out the gaps in what I'm asking for. I think writing "prompts" instead of "tasks" could be a neat way to approach software planning, even without passing those prompts to a fancy predictive text engine. Kind of like rubber-ducking.
Another thing I'm taking away is that I want to switch to Subtle mode for inline predictions. Having an "agent" mess with the files in the repo made me feel that, sometimes, a blank slate is better than a mediocre first step, because the mediocre first step gives you momentum down a mediocre path.
Mostly though, I'm increasingly convinced that I'm going to have a hard time finding a job in tech. I can't tell if I quit at the right time, or the worst possible time... but oh well! I'm here and I'll hopefully figure out something else to do, other than write long ranting walls of text that even I won't have the patience to read later.