Jailbreak

In the context of Generative AI, "jailbreak" refers to the process of removing or circumventing restrictions imposed by the LLMs original developers. This allows users to gain access to additional functionalities, customization options, or the ability to run unauthorized or third-party software that is not typically permitted within the standard operating parameters. Jailbreaking in the traditional sense is associated with smartphones and other devices, but in the realm of Generative AI, it involves modifying the behavior of AI models to perform tasks or operate in ways not originally intended by their creators.

See also "Prompt Injection Attack".

The LLM Knowledge Base is a collection of bite-sized explanations for commonly used terms and abbreviations related to Large Language Models and Generative AI.

It's an educational resource that helps you stay up-to-date with the latest developments in AI research and its applications.

Promptmetheus © 2023-2024.
All rights reserved.