5 Comments
Andrew Ayers

I'd also argue that BASIC is not a simple language; while its syntax is fairly simple, the kinds of programs that can be expressed in any particular dialect of BASIC can be quite complex in nature.

Even using a very simplified form of BASIC, such as TinyBASIC, one can create quite complex and useful applications...provided memory is not too constrained; larger implementations (I would count the original Dartmouth BASIC dialect among those) could be used to express very large and complex systems.

We only tend to think of BASIC as being...well...basic...because many people are only familiar with the dialects included with most early microcomputers, beginning in the 1970s. Many are not familiar with the more complex versions, because those were typically limited to larger minicomputer installations, or to certain niche areas (i.e., the various "business BASICs" of the 1980s and early 1990s; Pick BASIC might be considered an "extreme" example).

More recent examples abound, of course - BASIC is still being used and developed in various ways; people here are probably familiar with the likes of FreeBASIC, QB64, VB.NET, etc. (I should also note that I once found an online manual for an industrial robot system, from a familiar brand in that arena, describing a dialect of BASIC developed for that system...though using BASIC to teach robotics, or for industrial robotics control, isn't as odd as it may seem at first glance).

It's funny, though - I was trying to think of another language that could be called "simplest"...and even Logo doesn't fit that bill. It's another language that has had numerous dialects over the years, yet it also has "staying power" (for instance, you can find a form of "turtle graphics" in Python's standard library, as sketched below, and certain dialects of Microsoft BASIC have a "draw" command that is similar in scope).
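
To make that Python aside concrete, here's a minimal sketch assuming only the standard-library turtle module (and a Tk-capable Python install); the square it traces is just an arbitrary example of mine, not something from the post:

    # Logo-style turtle graphics, straight from Python's standard library.
    import turtle

    t = turtle.Turtle()
    for _ in range(4):     # trace out a square
        t.forward(100)     # move 100 units along the current heading
        t.left(90)         # turn 90 degrees counter-clockwise

    turtle.done()          # keep the window open until it is closed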

Full implementations of Logo, though, could also express quite complex applications; it was far more than just "turtle graphics" - but most people only remember that particular use, which I attribute to a small variety of reasons.

Part of all this also hinges on what is meant by the term "simple" when applied to a programming language...I mean, technically, a universal Turing machine's language is "simple" (in theory), yet it can express any and all algorithmic capabilities of any other Turing-complete system (given enough time and memory, of course)...

Andrew Ayers

This was a very interesting experiment; I do somewhat wonder what you might get with a similar series of prompts for the same output, but using a more common and widely available source language - perhaps JavaScript or Python (especially if you constrained the output via prompting to vanilla, non-framework-enhanced code)?

I also must point out that large language models (aka LLMs) -are- AI, as the term "artificial intelligence" historically and currently encompasses a vast and varied set of technologies.

LLMs are, at their core, artificial neural networks (ANNs) - one of those subset technologies.

The concept of an ANN also stretches back nearly to the beginning of modern digital computing; these models, in one form or another, have long been considered and grouped under the general heading of "AI".

While I could post references to the relevant papers on the classic McCulloch-Pitts neuron model (which slightly predates a paper by Alan Turing on his own concept of neural networks; I tend to wonder where and how things would've progressed had he been able to publish his thoughts first) - the model upon which Rosenblatt's Perceptron machine was based, and whose exposed flaws ultimately led to the development and realization of the backpropagation algorithm for learning - all of this history is easy to come by; such posting is therefore unnecessary, imho.
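
For anyone unfamiliar with the model being referenced, here is a minimal sketch of a McCulloch-Pitts-style threshold unit (in Python; the weights and threshold below are illustrative choices of mine, not values from the original papers) - the ancestor of the perceptron's learned weights and, eventually, of backpropagation-trained networks:

    # A McCulloch-Pitts-style unit: fire (output 1) iff the weighted
    # sum of the inputs reaches the threshold.
    def threshold_unit(inputs, weights, threshold):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation >= threshold else 0

    # Two excitatory inputs (weight 1) and a threshold of 2 behave like AND.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", threshold_unit((a, b), (1, 1), threshold=2))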

While other arguments could be made about this chain, and/or whether certain steps along it were strictly necessary (judgments mostly made with the benefit of hindsight), I would still argue that this chain is what helps define LLMs as being "AI".

I suspect that any annoyance at the terminology stems from popular ignorance of the history of AI, which closely parallels the development of symbolic computing technology from nearly the beginning of the post-WW2 era (developments in analog computing also played a role: the Perceptron was, at heart, a mostly "hand-tuned" physical analog computer, since digital computing technology of the period was not capable of simulating such a system - which underscores that modern ANNs work as simulations of analog processes).

As an aside, it's also mildly interesting to note that we've arrived at this technology, at least in a general sense, while also potentially being on the brink of another world war; a book-ending of sorts. While likely purely coincidental, the technology may also still play a role in pushing humanity into such a conflict once again - when ideally it should be doing the exact opposite.

John Ward

If vibe coding had existed back in the 8-bit computer days, I would be ruling the planet by now. Okay, maybe not the entire planet, but I would have had a nice duchy somewhere. I’m sure that’s true. It has to be.

Michael Valverde

Besides the problem of results often being wrong and not working, if you are proficient in Google-Fu, you can usually find better examples to teach you what you are trying to learn. Eventually, these tools will improve, but at what cost to the original poster of quality examples and tutorials? If people quit posting because they never get the credit and recognition they deserve, there will be a point where "AI" hits a wall.

Andrew Ayers

I was going to reply with something "witty" (that I probably would've failed at), but upon reflection, I realized you are most likely correct...with the only addition being "apply this to all of human creative output" (and quite possibly, strike the word "creative").

Right now, I would wager that most software engineers don't get credit or recognition as it is, even if they have an impressive "GitHub resume". From what I have gathered, employers (current or potential, whatever the case may be) don't seem to care about that, nor even look at it.

That anecdotal observation aside, I'm also not sure any of it will matter in the future; there are likely only a few minor "bottlenecks" left for researchers to overcome in order to turn these burgeoning AI systems into "universal" (or, more likely, "specialized" - so they can be tailored and sold) do-it-all assistants; an "employee workforce" accessible only to the few who can afford to use them, and who can direct ("prompt") them properly for the task(s) at hand.

That's worrying enough without the potential specter of AGI looming (if such a thing does come about, I personally believe it won't be by design).

Also, even in some idealized world where we could all easily use such systems, freely and without cost...I suspect that the vast majority of people have neither the need nor the desire to use such assistants to provide themselves an income; otherwise they'd likely already be employers themselves. To expect any of them to take that route now (never mind that such systems will likely not be "free") would be naive.

I should also note that I am currently very cynical about the world's state of affairs, due in no small part to my own personal circumstances - not to mention that I am witnessing, during a time I should be "looking forward to" (aka retirement - except I can't do that, either), the world descending into "collective madness", and quite possibly worse.

This personal "inner state" I do recognize, including how it colors my thoughts and comments in forums such as this; I dislike it, but I cannot escape it...short of becoming homeless and dropping out of society at large.

Which honestly could actually happen...and I'm not sure that it actually wouldn't be better for me personally, in the long run...
