In the previous articles we’ve tried to cover the broad strokes of what AI is and how it has impacted, and may continue to impact, the world, but we really haven’t been able to draw any solid conclusions. Well, we’ve managed one conclusion, and have perhaps been heavy-handed in repeating it. Simply put, AI is a tool, and any use it’s put to is on the person or persons choosing to use it that way.
That might make you wonder what this article is going to be about; after all, how much can really be said that won’t just repeat the prior articles? The good of AI has been somewhat covered. The bad of AI has been covered, or at least the things that risk being bad. Even the what and how of AI have been summarized, simplified, and stylized.
Looking at it that way, I suppose we’re left with the other three of the five question words: Who, When, and Why.
However, these questions aren’t really useful when applied directly to AI. Who is AI? Nobody, yet. When is AI? There’s a broad range of answers detailing various milestones in its development. Why is AI? That question offers some insight, and the answer is the same as for all tools: to help people.
Those questions are more useful asked around AI. Let’s ask things like: ‘Who is going to use AI?’ ‘When is AI appropriate to use?’ or ‘Why is AI technology being developed along these paths?’
Providing definitive answers to these questions (and the many others surrounding AI) is unfortunately somewhat beyond us, but this article endeavors to give you enough understanding of these and other questions in context that you’ll be able to evaluate them on a case-by-case basis.
Let’s start with the easy one: Who? From Who? come two main questions: Who is using AI, and who is impacted by that use? Both questions are important, and answering the first partially answers the second, since nobody would use AI without expecting it to have an impact they consider positive. Consider the writers’ strike: it was prompted in part by streaming services’ and production companies’ intentions to use AI in place of some or all writers. That answers some of who will use it and some of who would be impacted. The thing to keep in mind is that these questions need to be asked of every AI program.
It can’t be denied that the range of things AI can be applied to is quite broad, and with it the number and variety of people using AI have grown to encompass almost everyone. The potential impact extends to everyone as well. That’s a bit too broad to be of much help on its own, but keeping these questions in mind for every AI development is absolutely crucial to understanding its likely immediate effects. If you don’t consider who is going to be affected, you can’t understand any of the knock-on effects.
Everyone is going to have a reason to use AI.
Moving on, we have the question of When? Here it’s quite a bit harder to determine both good questions to ask and actual answers to them. When is it appropriate to use AI? It’s hard to say, because in every case it will cause effects that need to be considered individually. Another question to consider is When will the technology be able to perform certain specific feats? Necessary to think about, but nigh impossible to answer. By far the most useful question when actually utilizing AI is When was it trained? because that can strongly impact the results of many prompts.
However, the most important question from the When group is When should we stop using the AI? The answer most likely to be given is, “When it stops being profitable.” The better answer is, “When it starts negatively impacting people’s livelihoods.”
It’s important to remember that whatever AI is being used for, it stops being useful when it starts hurting people. Regardless of the function an AI program is serving, if it’s causing real harm to people, economically or otherwise, it needs to stop being used. Not out of any concern for some fictional AI uprising, but for the far more reasonable concern that offloading responsibility onto the program will allow decisions that are morally and/or practically reprehensible to be made without appropriate sanity checking.
Finally we get to the Why questions. These are always the most important, and the most difficult to formulate and answer. Why is someone using an AI? Why did the AI produce this? Why isn’t something like an AI already being used for this? And most of all: Why do they want to switch to using an AI?
Individually, these questions can reveal a lot about the circumstances of any particular plan to utilize AI, and combined they can reveal both the goals and the flaws in almost any such plan. But again our generalizations are somewhat stymied by the nature of the technology, because while we could provide examples for each of these questions, the specificity with which AI systems have to be designed prevents us from giving widely applicable answers.
Still, we would be remiss if we didn’t at least try to give you some of the more commonly applicable answers to some of these questions. Let’s start with our own answer to the question of Why we’re using AI-generated art for the headers on these articles.
Firstly, because it’s appropriate to have the subject featured to some extent in the article, and secondly, because we don’t yet have the budget for these articles to buy or create appropriate art or photographs as header images. This sort of reasoning is going to be very common among smaller, financially insecure, and start-up companies, or any project where AI is intended to be a focus. Other common reasons for using AI include trying to save money, trying to maximize the number of people a project will reach, and trying to offer infinite value with a product. Which isn’t to say no one is using AI because it’s the best fit for what they intend to do, just that this is less common than using it for some other reason.
The biggest concerns with AI remain the threat it poses to people’s jobs in the arts and the harmful business decision-making it can facilitate. Other issues, such as deepfakes, simply aren’t as serious once enough people have been educated in the various ways to evaluate media and determine whether it was likely AI-created. Most of the positives of AI are already being quietly adopted in nearly every industry where they’re valuable.
AI is a revolutionary technology that needs to be harshly questioned in its every implementation until we’ve figured out how to adapt to it. Certainly much of its use can be positive, but there will likely always remain groups that would choose to use it unscrupulously, either to increase their profits at the cost of societal stability or to push an ideological agenda inconsistent with the facts of reality. Unfortunately, this means the only way to be sure of something’s validity, and of the practicality of using an AI, is to question all parts of it. Hopefully things will progress to the point where that is no longer necessary, but any such time seems likely to be at least a decade away.