Turns out our previous article was timed perfectly. Geoffrey Hinton decided to quit Google and his work on AI mere days after we finished writing a brief piece on the topic. We don’t know much about Dr. Hinton beyond what’s already being reported elsewhere, but it is clear that he quit over specific concerns about how AI is, and can be, used.
In our previous article about AI we talked about it in a fairly general way, went over the basics of it as a generative engine for content, and took a neutral stance overall. That’s because it’s hard to say yet whether the technology will prove more damaging than helpful. Unfortunately, at the moment it is unquestionably leaning toward the damaging side.
The reasons for this stem from the core nature of what AI can currently do. The technology simply imitates the data it’s given, which makes it easy for bad actors to control the input so that falsehoods are presented as fact in a nearly undetectable, or at least extremely convincing, way. Another potential risk is that an AI offered as a free public utility could actually be a cover for data harvesting in an even more sinister way than is already being done through metadata tracking.
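To make the imitation point concrete, here’s a minimal sketch, entirely our own toy example and not how any production system works, of the simplest possible generative text model: a first-order Markov chain in Python. It can only ever recombine the data it was trained on, so whoever controls that data controls what it says. Modern large language models are vastly more capable, but they share this dependence on their training data.

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order Markov model: each word maps to the
    list of words that followed it in the training data."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10):
    """Generate text by repeatedly sampling a word that followed
    the current word somewhere in the training data."""
    output = [start]
    for _ in range(length):
        followers = model.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

# The model can only repeat patterns from its input. Feed it a
# falsehood and it will confidently reproduce that falsehood.
poisoned_data = "the moon is made of cheese and the moon is hollow"
model = train(poisoned_data)
print(generate(model, "the"))  # e.g. "the moon is made of cheese ..."
```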
The concerns Dr. Hinton has expressed mostly have to do with the effects AI might have on our ability to determine whether something is true, and with the transformative effects these systems will have on society as they become able to handle more and more jobs. Or at least, those are the concerns that have been most widely reported by other news outlets; there may well be others that simply aren’t being covered as thoroughly.
Examples of uses that might cause these kinds of effects include ‘deepfakes,’ AI-generated art reducing the demand for human artists, and algorithms that make business decisions automatically.
In any case, these are legitimate concerns, and Dr. Hinton is in one of the best positions to judge how serious they really are, given his background in both psychology and computer science. Our own ability to evaluate them is fairly limited, having not worked with and on AI for nearly as long as he has. However, there are a few points we would like to raise.
AI remains a tool. Every risk it poses comes from how it will be, or has been, used; no part of AI is innately risky, harmful, or detrimental to the wellbeing of society or individuals. Ideally this would mean working towards AI that cannot be used in harmful ways by bad actors, but that’s likely infeasible. What can actually be done to combat these risks, then? Education.
That’s right, education is the answer. Most AI-falsified images, videos, audio, and text are fairly easy to spot. Some AI-written school assignments have already been caught because the text contained sentences referring to the writer as an AI, or turned out to include wholly plagiarized sections. It usually doesn’t take much examination to find the flaws in these generated works: for images, look at the fingers and other joints; for video, there are often quality or framerate discrepancies; with audio, background noise tends to repeat on specific syllables; and text we’ve already covered.
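As a rough illustration of how mechanical the plagiarism half of that check can be, here’s a small Python sketch. The function names, file paths, and the 8-word window are our own choices for illustration, not a standard tool; it simply flags a submission when long word sequences reappear verbatim in a suspected source.

```python
def ngrams(text, n=8):
    """Return the set of all n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(submission, source, n=8):
    """Find n-word sequences that appear verbatim in both texts.
    Long shared runs are a strong hint of copying."""
    return ngrams(submission, n) & ngrams(source, n)

# Hypothetical usage: compare an essay against a suspected source.
essay = open("essay.txt").read()
source = open("source_article.txt").read()
matches = shared_passages(essay, source)
if matches:
    print(f"{len(matches)} verbatim 8-word passages shared with the source")
```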
But that doesn’t really sound like education; that’s mostly common sense. The real education is needed for the higher quality fakes. Recently an image of the Pope in a white puffer coat made the rounds. It was AI generated, but quite difficult to tell for sure from the image alone. That’s where the education is needed: not in knowing how to look at the content, but how to look at the context. For that image, one of the early warning signs was the lack of related photos. There is never going to be only one person photographing the Pope at a given moment, and there certainly would have been more photos taken before and after that one. This is the education that’s needed to avoid AI-generated misinformation: an education in identifying and examining the missing context.
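One concrete context check that can even be automated: genuine photos usually carry camera metadata (EXIF), while AI-generated images typically carry none. Below is a minimal sketch using the Pillow library; the file path is a placeholder, and keep in mind that missing EXIF proves nothing on its own, since social platforms routinely strip it. Present, consistent EXIF is simply one more piece of context to weigh.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def describe_exif(path):
    """Print any EXIF metadata in the image. Real camera photos
    usually record make, model, timestamp, and exposure settings;
    AI-generated images typically have none of these."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (stripped, or never existed).")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

describe_exif("suspect_photo.jpg")  # placeholder path
```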
Stay tuned for the next article in our AI series, which will cover the positives of AI.