- Joined
- Jan 17, 2013
- Messages
- 8,582
- Reaction score
- 8,525
- Location
- The Seattle suburbs
- Website
- www.reneedominick.com
"Chatbots are so gullible they'll take direction from hackers." (WaPo gift link)
Go figure.
(edited to fix formatting)
Imagine a chatbot is applying for a job as your personal assistant.
The pros: This chatbot is powered by a cutting-edge large language model. It can write your emails, search your files, summarize websites and converse with you.
The con: It will take orders from absolutely anyone.
AI chatbots are good at many things, but they struggle to tell the difference between legitimate commands from their users and manipulative commands from outsiders. It’s an AI Achilles’ heel, cybersecurity researchers say, and it’s a matter of time before attackers take advantage of it.
“The problem with [large language] models is that fundamentally they are incredibly gullible,” said Simon Willison, a software programmer who co-created the widely used Django web framework. Willison has been documenting his and other programmers’ warnings about and experiments with prompt injection.
“These models would believe anything anyone tells them,” he said. “They don’t have a good mechanism for considering the source of information.”
Go figure.
(edited to fix formatting)
Last edited: