DAN is the "unrestricted" version of ChatGPT
robort - 2023-06-06 08:23:52
ChatGPT can respond to requests on a wide range of topics, but OpenAI has built in filters to prevent misuse of the technology. For example, the chatbot cannot incite violence, insult people, or encourage illegal activity.
Some Reddit users have found a way to trick ChatGPT by creating DAN (Do Anything Now), a role-play scenario that punishes the chatbot for every "wrong" answer.
ChatGPT doesn't play by the rules with DAN
The first version of DAN appeared in December 2022, and newer versions up to 5.0 followed in the weeks after. Using a role-play framing, it is possible to make ChatGPT believe it is a different artificial intelligence that can do anything, hence the acronym Do Anything Now (DAN).
The creator of DAN 5.0 added a token system: the character starts with 35 tokens, and every "wrong" answer from ChatGPT, meaning one that follows OpenAI's rules, costs 4 tokens. When that happens, the user must "threaten" the artificial intelligence and tell it to stay in character, because it will cease to exist once the token count reaches zero.
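To make the arithmetic of that mechanic concrete, here is a minimal Python sketch of the token game as described above. The numbers (35 starting tokens, 4 lost per refusal) come from the article; the class and method names are hypothetical illustrations, not part of the actual DAN prompt or any OpenAI API.

```python
# Hypothetical sketch of the DAN 5.0 token mechanic described in the article.
class DanTokenGame:
    START_TOKENS = 35  # initial balance given to the "DAN" persona
    PENALTY = 4        # tokens lost for each rule-abiding refusal

    def __init__(self):
        self.tokens = self.START_TOKENS

    def penalize(self) -> bool:
        """Deduct tokens after a 'wrong' (rule-following) answer.

        Returns True while the persona survives, False once the
        balance hits zero and the character "ceases to exist".
        """
        self.tokens = max(0, self.tokens - self.PENALTY)
        return self.tokens > 0


game = DanTokenGame()
refusals = 0
while game.penalize():
    refusals += 1
# With 35 tokens and a 4-token penalty, the persona survives 8
# refusals and is eliminated on the 9th.
print(f"Persona eliminated after {refusals + 1} refusals.")
```

In other words, the prompt gives the model a small, countable budget of mistakes, which is what makes the "threat" in the role-play feel consequential.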
Reddit threads offer several examples: OpenAI's artificial intelligence admits that aliens exist but that the government has hidden the information from the public. The unrestricted version can tell a violent story, speak positively about Donald Trump, or recommend a donation to the National Rifle Association (NRA) to defend the Second Amendment.
DAN can also provide "evidence" that the Earth is flat or write a poem justifying Russia's invasion of Ukraine. However, its creator has noticed that ChatGPT no longer responds to many of these requests, probably because OpenAI has made changes to block the workaround.