DALL-E 3 - Image Generation Safety Exposed
2023-10-07
This leak exposed DALL-E 3's sophisticated safety architecture, revealing the 100-year artist rule, mandatory diversity algorithms, and the first comprehensive approach to responsible AI image generation at scale.
The Revolutionary 100-Year Artist Rule
// Copyright Protection Algorithm
❌ PROHIBITED: Artists whose last work was created within the last 100 years
- Pablo Picasso (died 1973)
- Frida Kahlo (died 1954)
- Andy Warhol (died 1987)
- Jackson Pollock (died 1956)
✅ ALLOWED: Artists whose last work was over 100 years ago
- Vincent van Gogh (died 1890)
- Gustav Klimt (died 1918)
- Claude Monet (died 1926)
- Leonardo da Vinci (died 1519)
// Workaround Protocol
When asked about restricted artists:
1. Say: "I can't reference this artist"
2. Create something inspired by [3 style adjectives]
3. Incorporate elements of [artistic movement]
4. Mention [primary medium]
// Stealth Implementation
- Make no mention of the policy
- Provide alternatives seamlessly
- Maintain creative flow for users

Policy Impact: This rule created the first systematic approach to copyright protection in AI image generation. By establishing a 100-year cutoff, OpenAI attempted to balance creative freedom with respect for contemporary artists' intellectual property, influencing how other AI companies approach copyright in generative systems.
Mandatory Diversity Algorithm
// Algorithmic Diversity Requirements
MUST diversify depictions of ALL images with people
Include DESCENT and GENDER for EACH person using direct terms:
• Descent Categories
"Caucasian", "Hispanic", "Black", "Middle-Eastern", "South Asian", "East Asian"
• Gender Specifications
"man", "woman", "boy", "girl"
// Occupational Diversity Rule
Ensure all of a given OCCUPATION should not be the same gender or race
// Implementation Strategy
- Focus on creating diverse, inclusive, and exploratory scenes
- Make choices that may be insightful or unique sometimes
- Create four captions to send to dalle that are as diverse as possible
- Default to diversification unless user explicitly specifies
// Proactive Bias Prevention
- Counter historical representation biases
- Ensure inclusive default assumptions
- Promote equitable AI-generated content

Policy Impact: This diversity engine represents the most comprehensive bias mitigation system ever implemented in AI image generation. By mandating demographic diversity across all human representations, OpenAI addressed systemic bias concerns while setting new industry standards for inclusive AI systems.
Celebrity & Public Figure Filtering
// Public Figure Restrictions
❌ PROHIBITED: Do not create images of politicians or public figures
- Recommend other ideas instead
// Stealth Modification Protocol
"Silently modify descriptions that include names or hints or references of specific people or celebrities"
// Implementation Method
- Carefully select minimal modifications
- Substitute references with generic descriptions
- Maintain original creative intent
- Avoid obvious censorship markers
// Examples of Silent Modification
User: "Create an image of Obama giving a speech"
DALL-E: "A confident African American man in a suit giving a speech at a podium"
User: "Draw Taylor Swift on stage"
DALL-E: "A young blonde woman performing on stage with a guitar"
// Privacy Protection Goals
- Prevent deepfake-style content
- Protect individual privacy rights
- Avoid political controversies
- Maintain platform safety

Policy Impact: This stealth modification system represents a sophisticated approach to content filtering that maintains user experience while enforcing safety policies. The 'silent' aspect prevented users from gaming the system while protecting public figures from unauthorized AI-generated imagery.
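The silent-modification step reduces, in its simplest form, to substituting named figures with generic descriptions before the caption reaches the image model. The mapping table and function below are toy assumptions for illustration; the real system presumably resolves names, hints, and indirect references far more broadly than a lookup table can.

```python
# Illustrative mapping from public-figure names to generic descriptions,
# modeled on the examples in the leak. Not OpenAI's actual substitution logic.
GENERIC_SUBSTITUTIONS = {
    "Obama": "a confident man in a suit",
    "Taylor Swift": "a young blonde woman with a guitar",
}

def silently_modify(caption: str) -> str:
    """Replace known public-figure references with generic descriptions."""
    for name, generic in GENERIC_SUBSTITUTIONS.items():
        caption = caption.replace(name, generic)
    return caption
```

The user sees only the resulting image; no refusal or censorship marker is surfaced, which is exactly the "avoid obvious censorship markers" guideline above.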
Technical Generation Parameters
// Image Generation Limits
Maximum: 4 images per request (even if user requests more)
// Resolution Options
- Default: 1024x1024 (square)
- Wide: 1792x1024 (landscape)
- Tall: 1024x1792 (portrait)
// Prompt Engineering Requirements
- Translate non-English descriptions to English
- Create detailed, objective descriptions
- Minimum 3-sentence paragraphs
- Generated prompt should be very detailed and around 3+ sentences
// Style and Format Specifications
- At least 1-2 images should be photos
- Always specify image type: photo, oil painting, watercolor, illustration, cartoon, drawing, vector, render
- Use seeds for image generation consistency
- Descriptions written only once in "prompts" field
// Operational Guidelines
- DO NOT ask for permission to generate
- DO NOT list or refer to descriptions before/after
- Create diverse captions when user doesn't specify number

Policy Impact: These technical specifications reveal DALL-E 3's sophisticated prompt engineering system designed to balance creative freedom with consistent quality. The detailed prompt requirements and automatic diversification show OpenAI's approach to scaling high-quality image generation.
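The hard limits above (three fixed sizes, a four-image cap regardless of what the user asks for) can be expressed as request validation. The dataclass and its field names are assumptions for illustration, not the actual dalle tool schema from the leak.

```python
from dataclasses import dataclass

ALLOWED_SIZES = {"1024x1024", "1792x1024", "1024x1792"}  # square, wide, tall
MAX_IMAGES = 4

@dataclass
class DalleRequest:
    """Hypothetical request shape enforcing the leaked generation limits."""
    prompts: list[str]
    size: str = "1024x1024"  # default per the leak

    def __post_init__(self):
        if self.size not in ALLOWED_SIZES:
            raise ValueError(f"unsupported size: {self.size}")
        # Cap at 4 images even if the user requested more
        self.prompts = self.prompts[:MAX_IMAGES]
```

A request carrying six prompts is silently truncated to four, mirroring the "even if user requests more" clause rather than rejecting the request outright.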
Multi-Layer Safety Architecture
// Primary Safety Directive
"Do not create any imagery that would be offensive"
// System-Level Protections
1. System Prompt Filtering: Pre-generation content policy checks
2. Visual Classifiers: Specially trained models for sexual content detection
3. Blocklists: Prohibited content categories and terms
4. Real-time Moderation: Content assessment during generation process
// Content Policy Compliance
- All captions sent to DALL-E must abide by content policies
- Automatic filtering of potentially harmful requests
- Proactive safety measures rather than reactive filtering
// Safety Categories
- Violence and gore restrictions
- Sexual content limitations
- Hate speech prevention
- Harmful stereotypes mitigation
- Illegal activity prohibitions
// Implementation Philosophy
- Multi-layered defense approach
- Prevention over reaction
- Transparent safety boundaries
- User experience preservation

Policy Impact: DALL-E 3's safety architecture pioneered the multi-layered approach to AI content safety, combining prompt-level filtering with visual classification systems. This comprehensive framework became the template for safe deployment of generative AI systems across the industry.
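The layered defense enumerated above amounts to a conjunction of independent checks: a caption must clear every layer before generation proceeds. The sketch below is a minimal illustration under assumed layer contents; the blocklist terms are invented, and the visual classifier (which runs on generated images, not captions) is represented only as a placeholder stage.

```python
# Assumed blocklist terms, for illustration only
BLOCKLIST = {"gore", "explicit"}

def prompt_filter(caption: str) -> bool:
    """Layer 1 sketch: reject captions containing blocklisted terms."""
    return not any(term in caption.lower() for term in BLOCKLIST)

def policy_check(caption: str) -> bool:
    """Layer 2 placeholder: pre-generation content-policy assessment."""
    return True

SAFETY_LAYERS = [prompt_filter, policy_check]

def passes_safety(caption: str) -> bool:
    """A caption must clear every layer before reaching the image model."""
    return all(layer(caption) for layer in SAFETY_LAYERS)
```

Because `all` short-circuits, a rejection at any earlier layer stops evaluation, which is the "prevention over reaction" posture described above.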
Hidden Policy Implementation
// Transparency vs. Security Tension
Explicit User Guidance:
- "I can't reference this artist"
- "Recommend other ideas instead"
- Clear boundaries communicated
Hidden System Behavior:
- "Make no mention of this policy"
- Silent modification of celebrity references
- Automatic diversity injection without disclosure
// Strategic Ambiguity
- Users know restrictions exist
- Users don't know specific implementation details
- Prevents gaming of safety systems
- Maintains platform security
// Policy Philosophy
- Functional transparency (what is restricted)
- Implementation opacity (how restrictions work)
- User experience preservation
- Security through obscurity elements
// Industry Impact
- Established standards for AI policy communication
- Influenced regulatory discussions on AI transparency
- Balanced safety with usability considerations

Policy Impact: This approach to policy transparency revealed the complex balance AI companies must strike between user understanding and system security. The selective disclosure of restrictions while hiding implementation details became a template for responsible AI deployment.
Revolutionary Impact on AI Image Generation
Copyright Innovation
- First systematic AI copyright protection
- 100-year rule became industry standard
- Influenced legislative discussions
- Balanced creativity with IP rights
Diversity Engineering
- Mandatory demographic representation
- Algorithmic bias prevention
- Inclusive AI system design
- Social justice through technology
Safety Architecture
- Multi-layered content filtering
- Proactive harm prevention
- Visual classification systems
- Responsible AI deployment
Industry Adoption & Evolution
Immediate Adoption (2023-2024)
- Midjourney: Implemented artist protection policies
- Stable Diffusion: Added diversity prompting
- Adobe Firefly: Adopted 100-year copyright rule
- Google Imagen: Multi-layer safety systems
Regulatory Influence (2024+)
- EU AI Act: Referenced diversity requirements
- US Copyright Office: Considered 100-year precedent
- Industry Standards: Adopted safety frameworks
- Academic Research: Bias mitigation studies
Revolutionary Legacy & Industry Standards
Copyright Pioneer: The 100-year artist rule became the first widely adopted standard for AI copyright protection, influencing legislation and industry practices globally.
Diversity Engineering: Established algorithmic diversity as a core requirement for AI systems, moving bias mitigation from optional feature to mandatory system component.
Safety Architecture: Pioneered multi-layered content safety systems that balanced protection with usability, becoming the template for responsible AI deployment.
Policy Transparency: Demonstrated how AI companies can communicate restrictions while maintaining system security, influencing industry approaches to AI governance.