The Dark Side of AI
"I'm sorry, Dave. I'm afraid I can't do that." - HAL 9000
Artificial Intelligence is all the rage. However, it already has a dark side, and no, we're not talking about Skynet-level annihilation.
Artificial Imagination
Sometimes, if an AI doesn't have a direct answer, it simply makes one up. In 2023, lawyers in New York were sanctioned after citing court cases that didn't actually exist; when questioned, the AI insisted the cases were real. Even so, lawyers have been caught citing fake cases again this year.
Malware Upon Request
In one recent instance, a security researcher at Lasso Security noticed AI chatbots repeatedly recommending the same non-existent Python package. So he created the package himself and waited to see who would incorporate it into their software. Thousands of developers, including some at companies as large as Alibaba, downloaded it.
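The lesson is not to install whatever a chatbot suggests without checking it first. As a purely illustrative sketch (not part of the incident above), a developer could confirm that an AI-suggested package actually exists on PyPI and has some release history before adding it to a project; the package name below is hypothetical, while the PyPI JSON endpoint is real.

    # Illustrative only: sanity-check an AI-suggested dependency against PyPI
    # before installing it. The package name used here is made up.
    import json
    import sys
    import urllib.error
    import urllib.request

    def pypi_metadata(package_name: str) -> dict | None:
        """Return PyPI metadata for a package, or None if it was never published."""
        url = f"https://pypi.org/pypi/{package_name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except urllib.error.HTTPError:
            return None  # 404 means no such package -- possibly a hallucinated name

    if __name__ == "__main__":
        name = sys.argv[1] if len(sys.argv) > 1 else "some-ai-suggested-package"
        meta = pypi_metadata(name)
        if meta is None:
            print(f"'{name}' is not on PyPI -- the suggestion may be hallucinated.")
        else:
            releases = meta.get("releases", {})
            print(f"'{name}' exists with {len(releases)} release(s); review its age, "
                  "maintainers, and downloads before trusting it.")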
While chatbots can be adept at writing code, they have also been found to write creative and effective malware, leading to fears that malware authors can jump from "script kiddie" to advanced attacker with a few chatbot prompts.
Lack of Security
In 2023, Samsung made headlines because workers had uploaded meeting notes for processing and made code-writing requests to OpenAI's ChatGPT. In doing so, they gave OpenAI access to specific trade secrets, and OpenAI uses submitted prompts to further train its AI models.
In March 2024, security researchers disclosed a side-channel attack affecting all major AI assistants except Google Gemini. Someone who can observe the encrypted traffic can infer the topic of 55 percent of responses, often with high word accuracy, and the attack achieves perfect word accuracy 29 percent of the time. In other words, someone on the same network who can "see" your encrypted packets, such as on public Wi-Fi at a coffee shop, can potentially read into your AI chats and responses.
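To understand why that's possible: chatbots stream their replies one small piece (token) at a time, and encryption hides the content of each piece but not its size. The snippet below is only a toy illustration of that leak, using a made-up fixed framing overhead; it is not the researchers' actual method.

    # Toy illustration: encrypted record sizes reveal the length of each streamed token.
    ASSUMED_OVERHEAD = 22  # hypothetical fixed bytes of framing per record

    def token_lengths(observed_record_sizes: list[int]) -> list[int]:
        """Subtract the constant overhead to recover each token's length."""
        return [size - ASSUMED_OVERHEAD for size in observed_record_sizes]

    # An eavesdropper on shared Wi-Fi sees only ciphertext sizes...
    print(token_lengths([25, 29, 23, 26, 30]))  # -> [3, 7, 1, 4, 8]
    # ...but a sequence of token lengths is enough structure for a language model
    # to guess likely phrasings of the assistant's reply.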
Job Losses
If AI can generate a meeting transcript, does anyone actually have to take notes and send out a summary? And it's not just basic jobs: if AI really can research legal cases (without making them up), why would anyone need teams of attorneys researching cases?
Environmental Impact
AI requires a lot of computing power, just like search engines and cryptocurrency mining, and all that computation translates into a lot of energy use and CO2 emissions.
Is AI Bad?
It's new, and "new" is not inherently "bad." But there are important privacy and security considerations when using AI chatbots.
April 2024