In my previous post, I poked fun at people who don't understand what developers do yet want to loudly proclaim that AI has already replaced us, or is months away from doing so. I still think that's completely ridiculous, and honestly it would be amusing if it didn't betray just how little some people know about software development.
Now you might have read that post and decided "Old Man Rusty hates AI!" And you'd be wrong. Don't get me wrong, I don't love that some Silicon Valley elites hoovered up the world's data without asking, and then set some power grids alight using it all. I don't love the ethics of these companies. I don't love their attitudes. I don't love just how much money and resources are going into all of this based solely on hype and dreams. I don't love that their answer to all the criticism is "yeah but in 6 months AGI, which will solve all the world's problems!". Buuuuuuuut. And yes, it's a big buuuuuuuuutttttt. If you want to put all that aside (and mad respect for you if you don't), the annoying thing about it all is that this isn't Crypto 3.0. There is actually some kind of useful tool here for people in my industry. They haven't just invented the world's slowest, most expensive to run, most useless database you can put links to JPEGs on.
So if I don't think it can replace me, and is nowhere near replacing me, is it still useful? In my experience, yes! I use a combination of Codex and Gemini CLI regularly to help me get work done. Before you ask, I can't afford Claude. I am but a poor, humble, small-time developer and those two are just cheaper (yes, I know, heavily subsidised, but again this post isn't about the negatives). What it's great at might seem a bit confusing at first. It's great at the very obscure, detail-orientated, very small things. For example, you have an unsymbolicated C++ or Swift crash log. I don't care how much industry experience you have, 95% of us have NFI how to decipher one of those, and the other 5% are liars. We often struggle even just to symbolicate them. LLMs don't. They excel at language tasks, and this, dear reader, is another kind of language. They can read it very well. They can interpret it, and they can then tell you what probably caused the crash and where. That is, to put it bluntly, insanely useful. What might have taken me hours now takes minutes. Great! Or say you want to convert Objective-C code to Swift: guess what, it's great at that too, because it's another language-based task.
Now, ironically, it's also very good at the big wishy-washy things. You might think that's counter-intuitive, but it has so much training data that creating something which looks kind of like what you want isn't hard. Again, its other great skill is imitation. You show it a picture of an app and it can make something like it very quickly, ignoring the details and rough edges of course. A few months ago we wanted to build a vocoder at Bjango, but we had no experience in doing that. We wanted to know if it was even possible, and if so how. So we prompted some LLMs and by later that day we had a working prototype. Working is very generous here; it didn't actually vocode, but it showed us what we wanted was indeed possible and we just had to work through it. Marc and I gave a presentation about it if you want to learn more. We ended up spending three months rewriting everything the AI did, but it was insanely useful to have the way forward illuminated.
The other example is more recent. I know nothing about neural nets and machine learning, but there's a project we're considering that needs me to learn this. So again, I turned to my LLM friends who've hoovered up the world's data on these processes and got them to get me 70% of the way there. They explained TensorFlow and PyTorch, epochs and losses, training data and so much more. I could have found all that on my own, but having that knowledge to draw on, and getting close enough to a working prototype that I could do the rest myself, was invaluable.
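If those words were mostly jargon to you too, here's roughly what they mean in practice. This is a minimal PyTorch sketch with made-up data and a toy model, nothing like our actual project, but it's the kind of scaffold the LLMs will happily explain to you line by line:

```python
# A toy training loop. Everything here (data, model shape, hyperparameters)
# is invented for illustration; it just shows what "epochs", "losses" and
# "training data" actually look like in PyTorch.
import torch
import torch.nn as nn

# Fake training data: 256 samples, 4 features each, with a binary label.
inputs = torch.randn(256, 4)
labels = torch.randint(0, 2, (256,)).float().unsqueeze(1)

# A tiny model: one hidden layer, nothing fancy.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# An "epoch" is one full pass over the training data.
for epoch in range(10):
    optimizer.zero_grad()                 # reset gradients from the last step
    predictions = model(inputs)           # forward pass
    loss = loss_fn(predictions, labels)   # how wrong are we right now?
    loss.backward()                       # backpropagate
    optimizer.step()                      # nudge the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Twenty-odd lines, but without someone (or something) to explain each piece, it's a wall of vocabulary.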
It's also useful if, like me, you work alone and don't have any other humans to do code reviews or talk through problems with. You can get an LLM to do that, and it imitates a developer well enough to help you realise where you might have gone wrong. They're also trained to be insanely upbeat, which, while sometimes annoying, can be a nice change from dealing with real humans.
It's also useful for what I call "fun stuff". Things that you'll only use personally once or twice and where you don't care about the code quality. Or things where you want to prompt an LLM into building you a fully working prototype of an idea before you go and replace everything it did with your own code. Move this up. Change this screen. No, let's try linking this to this. OK, that's no good, give the image a gradient fade. And so on until you get what you want. You'd be surprised how easy production-ready code is to write once you've taken the time to actually prototype something and work through all your ideas.
Finally, and more soberingly, I know some people out there are actually using it to do their job: managing agents, scheduling tasks, performing reviews and getting it to submit code to production for other people to use. To those people I want to say two things. One: I see you. You've found a new toy and you want to see just how fast it goes, I get it. In the opinion of this humble author, though, one day you're going to slam that shiny toy right into a tree. Might not be today, might not be tomorrow, but one day it's going to come back to bite you. Two: if you're shipping code at a velocity where you don't understand it and you're not actively working on it, don't forget that ultimately you're the one responsible for the outcomes. When someone can't get part of your app to work because an LLM decided JSON was an amazing storage format for indexed, sortable data. When you leak all your customers' personal details to the internet. When things break and the bots just can't fix them without breaking more things... well... that's all on you. And that, dear reader, is why the title of this post is what it is. Treat everything that comes from an LLM as if the intern who just started this week did it. Double check it. Triple check it. Don't assume anything it told you is true, because that's the actual job you're paid to do.