As generative AI becomes increasingly popular, some people just seem to have to mess with it. Sometimes it can be amusing, as when I discovered that ChatGPT is programmed to protect the rights of fantastical and folkloric entities just as if they were actual people, which makes it tricky to elicit certain kind of fictional scenarios (best way is to begin with "tell me a story" and make it clear that the human character causing the situation learns a lesson at the end). But some of it seems to be malicious pranksters trying to break the system.
One of these techniques is called a prompt injection attack. A Redditor was able to
elicit some hostile responses from an AI chat bot by deliberately exposing it to an article about this technique, at which point the chat bot terminated the chat.