You’ve probably been hearing a lot about “AI” recently. It’s been a big topic in the news: between “AI” art and programs like ChatGPT, there have been plenty of discussions and concerns about what it’s going to be, or already has been, used for. Allow me to dispense with the mysteries of “AI” and provide a decent overview.
First we need to cover what it isn’t. It isn’t anything like sci-fi. None of these “AI” are anything like those featured in the works of Asimov or other authors. They aren’t sapient, sentient, or even really intelligent. Above all, they aren’t able to replace people’s jobs yet.
What they are, then, are complex sets of code meant to produce something that mimics the data that’s been fed into them. An elaborate mimic following tens of thousands of carefully managed rules and steps to produce something almost new and unique. To put it simply, it’s copying enough things at the same time that it looks original.
Is it worth using, worrying about, or otherwise concerning yourself with in general? Since it’s just another tool, the better question is: is this the right tool for the job?
The pitfalls of working with “AI,” or in environments where it might be used.
Each of “AI’s” potential applications has unique issues, ranging from the technical to the practical, and even into questions of legality.
Let me break that down a bit:
Starting with “AI” artistic offerings, we can examine what one does, and why that can be a serious problem. The process begins with two sets of input: the training data and the request.
- The request is what I typed into Bot Draw A – “Cat Sundae” for example.
- The reference data for this request is every image the bot had been given that was tagged with either “cat” or “sundae,” and, depending on the specific “AI” model, it may even include similar or related words.
Particularly advanced programs may even have images that were divided for clarity, showing exactly which parts are ‘Cat’ or ‘Sundae.’ The “AI” then takes these images, usually thousands of them, and creates new images from them until its matching algorithm produces a few with a high enough confidence score (sometimes a value you can set yourself) that it will show them to you. Usually this takes around 5-15 seconds and results in something technically correct, yet utterly unlike your intent.
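The generate-and-score loop described above can be sketched in a few lines of Python. To be clear, every name below is a hypothetical stand-in, not a real model’s API; the stubs use random numbers where a real system would run its sampling and matching stages.

```python
import random

CONFIDENCE_THRESHOLD = 0.85  # sometimes user-adjustable, as noted above
NUM_RESULTS = 4              # how many images to show the user

def generate_candidate(prompt, rng):
    """Stand-in for the model producing one candidate image."""
    return {"prompt": prompt, "seed": rng.random()}

def match_confidence(candidate, rng):
    """Stand-in for the matching algorithm scoring a candidate
    against the prompt's reference data."""
    return rng.random()

def generate_images(prompt, seed=0):
    rng = random.Random(seed)  # seeded for repeatability in this toy sketch
    results = []
    # Keep generating until enough candidates clear the confidence threshold.
    while len(results) < NUM_RESULTS:
        candidate = generate_candidate(prompt, rng)
        score = match_confidence(candidate, rng)
        if score >= CONFIDENCE_THRESHOLD:
            results.append((candidate, score))
    return results

images = generate_images("Cat Sundae")
print(len(images))  # 4
```

The key point the sketch illustrates is that nothing in the loop checks whether a candidate matches your *intent*; it only checks a similarity score, which is why the results can be technically correct and still wrong.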
Now would you look at that: four images that are almost what was being aimed for. You can tell pretty easily that none of them quite work. The bottom left is probably my favorite, yet also looks the least like a ‘Cat Sundae.’ I’m also guessing none of these are what anyone was expecting from that prompt.
“Cat Sundae” by CLS using Imgur.com’s Bot Draw A
This is largely because the algorithms used to create the graphic lack any sort of intuition or aesthetic context. That means you need to be highly specific in your requests to get images matching your intent. The higher-end “AI” art creation tools can refine a generated image, which makes it easier to be as specific as necessary. For now, any art made with “AI” assistance will be just that: made with “AI” assistance.
There are still more problems hiding behind these “AI” creations, and this is where we get into the legal situation. Every “AI” has training data, and that data typically belongs to someone. If your “AI”-generated image came from an “AI” using copyrighted training data, does the generated image belong to the holder of that copyright? Does no one own the copyright to the generated image? Or does it belong to the creator of the “AI’s” code?
These questions aren’t entirely settled law yet, as far as I’m aware anyway, but some pieces have been. If a copyrighted image in the “AI’s” training data wasn’t properly sourced (used with permission from the copyright holder), there is potential legal trouble in using it to generate an image. Currently there’s precedent that training “AI” on copyrighted data is legal here in the USA, provided it meets the criteria for fair use. Generally it’s fine to use “AI” for anything non-profit (or educational, such as this article), but as soon as you use “AI” instead of an artist, things start getting questionable.
Of course, that isn’t the end of the possible issues with using “AI” to generate things. In text, it becomes a lot more obvious that “AI” generation is effectively stealing anyone and everyone’s notes on the matter. When you first read most “AI”-generated text, it’s usually entirely parsable, and you won’t have many, if any, complaints about the structure. The problem comes when you actually try to make sense of it. In many cases it simply won’t be wholly cogent, either referencing impossibilities or including tangentially related but otherwise nonsensical things or actions. An “AI”-written recipe may ask you to flambé the sushi or roast some ice cream. Similarly, attempting to get an “AI” to write, say, an article on “AI” could result in any number of peculiar turns of phrase being used incorrectly.
One of the biggest issues, though, is that these “AI” know and can use your location information when they’re not hosted locally. Admittedly, it isn’t anything new for a website or service to use your IP address to track your approximate location. The important thing here is that the “AI” can and will say that it doesn’t. To be clear, I’m specifically avoiding the word “lie” in this context because the “AI” is incapable of being blamed for this. The one that would be lying is whoever made the “AI,” and it’s important to remember that.
These “AI” are just tools, for now. One day we might have true A.I. – real digital sapients – to share the world with, culpable for their own actions and capable of being our companions into the future. Currently, however, we must blame their creators for any bot perfidy or misuse of provided data.