Jailbreaking Prompts

This subforum is dedicated to discussing advanced prompt writing techniques that "jailbreak" AI models to produce unexpected and creative outputs. Members can share their knowledge and experiences with writing prompts that push the boundaries of what AI models can do, and discuss the ethical implications of such techniques.
Important Threads
 
Started by Tom, last post by Tom 04-14-2023, 07:17 PM