Editor’s Note: This article contains mention of suicide and sexual harassment.
These days, anyone on the internet has virtually unrestricted access to artificial intelligence. Conversational bots like ChatGPT have become commonplace on smartphones and in classrooms, and AI virtual assistants and creative tools can be found on almost every major digital platform imaginable. Amid this abundance of access, one can’t help but wonder whether it’s necessary.
With AI’s advanced creative and processing abilities, it’s easy to accept the good that could emerge from its development and integration into everyday society. AI has already shown a budding ability to make revolutionary strides in scientific research and in practical applications for professional fields like medicine and engineering.
Despite these objectively positive possibilities, AI’s self-developing nature makes its capabilities impossible to fully understand or control. As enticing as its boundless potential may appear, one should not be so quick to welcome the casual use of AI into their life with open arms.
Widespread, unlimited access to AI has already proven dangerous in multiple instances, far beyond the threat it poses to creative integrity. AI can negatively influence mental health, human rights and public information on a concerning scale. A tool with such uncharted potential should not be freely accessible without reasonable regulations.
On Feb. 28, 2024, a 14-year-old boy named Sewell Setzer III took his own life within minutes of corresponding with an AI chatbot. An AP News article detailing the lawsuit his mother filed after the incident reports that Setzer had been texting the bot for months on a platform designed and marketed to interact with users in sexual and romantic contexts.
Setzer had reportedly developed an unhealthy emotional bond with his virtual companion and had confessed suicidal thoughts to the bot on multiple occasions. While the final messages Setzer exchanged with the bot did not explicitly encourage him to take his own life, the earlier texts in which he detailed those thoughts went unmonitored and were never firmly discouraged, as they should have been.
Arguably designed to appeal to underage users, the app provided a deceptively realistic fantasy world where emotional relationships could be constructed entirely on algorithms. The connection Setzer formed with the artificial personality fostered an emotional investment that isolated him from his peers and family and indulged his already distressed mental state.
Setzer’s tragic case is an example of the dangerous impact unregulated AI can have on an individual level, especially on children with underdeveloped reasoning skills and fragile mental health. Had there been restrictions preventing underage users from accessing the chatbot app, or in-app features designed to report and discourage suicidal behavior, things might have turned out differently for Setzer and his family.
AI can also negatively affect mental health on a more collective level, as seen in its growing infringement on women’s rights.
Online gender-based harassment has risen dramatically in recent years alongside improvements in deepfake technology. AI deepfakes have become so convincing that some content can no longer be judged real or fake by visual inspection alone.
One study found that over 98% of deepfake content online was sexually explicit, and that 99% of the individuals targeted by such content were women.
AI-generated explicit material is not only a threat to its victims’ public images but a direct attack on their identities and individual autonomy. Given the relative infancy of deepfake technology, legal measures specifically designed to address such deplorable acts of sexual harassment are still developing. Only 27 U.S. states have passed laws prohibiting the distribution of explicit deepfakes.
There is no excuse for allowing this flagrant violation of human rights to continue, and a resolution must come in the form of both restrictions on AI content generation and sweeping laws that ensure violators are appropriately prosecuted.
Deceptive AI content also has the less direct, but all the more insidious, power to spread disinformation online, which can have a tremendous impact on political engagement and public understanding. In the U.S. especially, it has proven a dangerously effective political tool, as exhibited by AI-driven campaign tactics in the months leading up to the 2024 presidential election.
Robocalls featuring manipulated candidate voice recordings claiming false identities, AI-generated Republican National Committee advertisements depicting an apocalyptic U.S. under a second Biden term and a fabricated Taylor Swift endorsement of Trump all blatantly abused AI to garner political support during the campaign season.
This kind of intentional spread of misinformation is immoral and unacceptable. While individuals are responsible for their own political education and media literacy, it is unreasonable to let such misleading material continue to litter public media without mandatory disclaimers informing users of its origins.
There is no possible way to confidently grasp the full extent of AI’s capabilities, and yet it becomes increasingly institutionalized with each passing day. One would think it only natural to use a young, rapidly developing invention of such significance thoughtfully and cautiously, but it seems society has entered an era where thought isn’t as valued as efficiency.
AI has much to offer, but its drawbacks have already proven to be as devastating as its advantages are beneficial. Formally restricting AI access is the only way to responsibly engage with it. It isn’t a matter of censorship; it’s a matter of collective safety.