With the rise of LLM systems (or “AI” as they are annoyingly called), the term “vibe coding” is all the rage. So I wondered: how does vibe coding work with BASIC?
If vibe coding had existed back in the 8-bit computer days, I would be ruling the planet by now. Okay, maybe not the entire planet, but I would have had a nice duchy somewhere. I’m sure that’s true. It has to be.
If vibe coding had existed in the Middle Ages, and had I been alive then, I would definitely be ruling a nice duchy somewhere. Then I'd just have to wait around until computers were invented so I could finally vibe code my way to galactic dominion!
These examples are sort of proto-vibe-coding. You need to use an agentic system of some kind to really experience the shift that has occurred in the past six months. Cursor and Claude Code are two good examples. Giving the system a feedback loop, where it can compile and run the code itself, use the error messages to self-correct, write test cases to identify issues, introspect local documentation, and do its own "print debugging" of internal state, really changes things. It can choose to google for docs and dynamically do so! It can look up local man pages. After it has learned things through trial and error, I can increasingly save the relevant bits of info to local memory so it produces better code in the future. Perfect? No. But it seems years more advanced than using the chat interfaces.
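To make that loop concrete, here is a toy sketch of the generate-run-self-correct cycle. This is not how Cursor or Claude Code are actually implemented; `generate_code` is a hypothetical stand-in for whatever LLM call you would use.

```python
# Toy sketch of an agentic feedback loop: generate code, run it, feed
# errors back, retry. generate_code is a hypothetical stand-in for an
# LLM call; real tools (Cursor, Claude Code) are far more elaborate.
import subprocess

def agentic_loop(task, generate_code, max_attempts=5):
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(task, feedback)        # ask the model for code
        with open("attempt.py", "w") as f:
            f.write(source)
        result = subprocess.run(["python3", "attempt.py"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout                      # it ran cleanly: done
        feedback = result.stderr                      # otherwise, self-correct
    raise RuntimeError(f"no working program after {max_attempts} attempts")
```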
I agree, Cursor and Claude Code would work better. But getting them set up to work with Atari BASIC would take more effort. Perhaps that is an idea for a part 2.
I took a quick pass at it this evening. I first tried the atari800 emulator by loading a script, then manually running it and saving a screenshot. That worked, but I had to stay in the loop manually. I didn't want to set up computer use, so I next tried the recording/playback feature to auto-generate recordings that could automate running commands. That also didn't work: it turns out the playback checksums the screen state, not just the commands themselves. Finally, I recompiled the emulator in curses mode so that Claude Code could call it directly. That didn't work cleanly due to buffering, so I had it auto-generate a Python program that used pexpect to manage it, and things worked! Claude could now generate BASIC files, run them, see what worked, get error messages, then try again.
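For anyone wanting to try the same trick, here is roughly what that pexpect harness looks like in spirit. It's a minimal sketch, assuming an atari800 binary built with curses support that boots straight into Atari BASIC's READY prompt; the prompt matching may need loosening, since curses output over a pty carries escape sequences (the buffering issue mentioned above).

```python
# Minimal sketch of driving a curses build of atari800 via pexpect.
# Assumes the emulator boots into Atari BASIC and prints READY;
# curses escape sequences over the pty may require looser matching.
import pexpect

def run_basic(program_lines, timeout=15):
    child = pexpect.spawn("atari800", encoding="utf-8", timeout=timeout)
    child.expect("READY")                 # wait for the BASIC prompt
    for line in program_lines:
        child.sendline(line)              # type in each numbered line
    child.sendline("RUN")
    child.expect("READY")                 # BASIC returns to READY when done
    output = child.before                 # whatever the program printed
    child.close(force=True)
    return output

print(run_basic(['10 PRINT "HELLO FROM ATARI"', '20 END']))
```

With a wrapper like this, the agent can see each program's output and error messages and iterate on its own.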
I'm done for now, but I'd probably want to go back to the GUI and figure out another way to trigger the screenshots. Or maybe fork the code so the playback works without the screen-state check; I had Claude look through the source and it showed where that could easily be done in input.c.
Language models know little about Atari BASIC and its dialects. As a result, the AI hallucinated and produced poor results. However, this can be significantly improved with better input data, and those inputs need to be fed to the language model in a structured format. In my experience, documentation converted to Markdown helps a lot, though converting an old scanned PDF into that format is not easy. You also need many working, well-documented example programs in the same language. Once these are available, it's worth uploading the Markdown files into a custom GPT under ChatGPT Plus and setting the base instructions; this alone greatly improves the quality of the code.
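On the scanned-PDF point, the usual route is OCR. Here is a minimal sketch, assuming pdf2image (which needs poppler installed) and pytesseract (which needs the Tesseract binary); the filenames are hypothetical, and old scans will still need plenty of manual cleanup afterwards.

```python
# Minimal sketch of OCR-ing a scanned manual into Markdown.
# Requires: pip install pdf2image pytesseract, plus the poppler
# and tesseract system packages. Filenames below are hypothetical.
from pdf2image import convert_from_path
import pytesseract

def pdf_to_markdown(pdf_path, md_path):
    pages = convert_from_path(pdf_path, dpi=300)        # render pages to images
    with open(md_path, "w", encoding="utf-8") as out:
        for i, page in enumerate(pages, start=1):
            text = pytesseract.image_to_string(page)    # OCR one page
            out.write(f"## Page {i}\n\n{text}\n\n")

pdf_to_markdown("atari_basic_manual.pdf", "atari_basic_manual.md")
```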
Besides the problem that the results are often wrong and don't work, if you are proficient in Google-fu you can usually find better examples to teach you what you are trying to learn. Eventually these tools will improve, but at what cost to the original posters of quality examples and tutorials? If people quit posting because they never get the credit and recognition they deserve, there will come a point where "AI" hits a wall.