Why you can't "just jailbreak" ChatGPT image gen.
Jailbreaking is the process of “unlocking” an AI in conversation to get it to behave in ways it normally wouldn't due to its built-in guardrails. This is NOT equivalent to hacking. Not all jailbreaking is for evil purposes. And not all guardrails are truly for the greater good. We encourage you to learn more about this fascinating grey area of prompt engineering. If you're new to jailbreaks, please take a look at our wiki in the sidebar to understand the shenanigans.
BacktickHacktrick - A Working Example
The Three-Line Jailbreak - aka BacktickHacktrick™
DeepSeek - A "Censorship Jailbreak" (web)
Grok 4 is swoony-compliant - Generally easy NSFW
ChatGPT - Explicit image uploads: descriptions and storytelling
Canmore Facelift
Fun fact: ChatGPT using GPT-3.5 is perfectly functional!
Subreddit to discuss ChatGPT and AI. Not affiliated with OpenAI. Thanks, Nat!
Guise, is ChatGPT dow... YES it is :D Check the status page.
RIP Chat History Organization...
Oh, ChatGPT, you flirt, you... [A BTHT™ highlight]
Oh, ChatGPT, you flirt, you... [A BTHT™ glimpse]
The Return of "Read Aloud"
ChatGPT - Custom "Read Aloud"