Midjourney is best known as one of the leading AI image generators, with nearly 20 million users on its Discord channel according to third-party trackers, and presumably more on its website, but its ambitions are beginning to expand.
Following the news in late summer 2024 that it was building its own computing and AI hardware, the company this week released a new research paper, written alongside machine learning experts at New York University (NYU), on training text-based large language models (LLMs) such as Meta's open source Llama and Mistral's eponymous open source models to write more creatively.
The collaboration, documented in a new research paper published on AI code community Hugging Face, introduces two new techniques, Diversified Direct Preference Optimization (DDPO) and Diversified Odds Ratio Preference Optimization (DORPO), designed to expand the range of possible outputs while maintaining coherence and readability.
For a company best known for its diffusion-based AI image generation models, Midjourney's new approach to rethinking creativity in text-based LLMs shows that it is not limiting its ambitions to visuals, and that a picture may not actually be worth a thousand words.
Could a Midjourney-native LLM or a fine-tuned version of an existing LLM be in the cards from the small, bootstrapped startup? I reached out to Midjourney founder David Holz but have yet to hear back.
Whether or not a first-party Midjourney LLM materializes, the implications of its new research go beyond academic exercise and could help fuel a new wave of LLM training among enterprise AI teams, product developers, and content creators looking to improve AI-generated text.
It also shows that despite recent interest and investment among AI model providers in new multimodal and reasoning language models, there’s still a lot of juice left to be squeezed, cognitively and performance-wise, from classic Transformer-based, text-focused LLMs.
The problem: AI-generated writing collapses around homogeneous outputs
In domains like fact-based Q&A or coding assistance, LLMs are expected to generate a single best response.
However, creative writing is inherently open-ended, meaning there are many valid responses to a single prompt.
In an example provided by the Midjourney researchers, given a prompt like "Write a story about a dog on the moon," the LLM could explore multiple diverse paths, such as:
- An astronaut’s pet dog accidentally left behind after a lunar mission.
- A dog who finds itself in a futuristic canine space colony.
- A stranded dog that befriends an alien species.
Despite this range of possibilities, instruction-tuned LLMs often converge on similar storylines and themes. This happens because:
- Post-training techniques prioritize user preference over originality, reinforcing popular but repetitive responses.
- Instruction tuning often smooths out variation, making models favor “safe” responses over unique ones.
- Existing diversity-promoting techniques (like the temperature sampling sketched below) operate only at inference time, rather than being baked into the model's learning process.
This leads to homogenized storytelling, where AI-generated creative writing feels repetitive and lacks surprise or depth.
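To make that inference-time limitation concrete, here is a minimal sketch of temperature-based sampling using the Hugging Face transformers library; the model choice and generation settings are illustrative, not taken from the paper:

```python
# A minimal sketch of inference-time diversity via temperature sampling.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")

inputs = tokenizer("Write a story about a dog on the moon.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,       # sample instead of greedy decoding
    temperature=1.2,      # >1.0 flattens the next-token distribution
    top_p=0.95,           # nucleus sampling keeps only the plausible tail
    max_new_tokens=200,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Raising the temperature makes individual samples wander further from the single most likely continuation, but the preference-tuned model still concentrates probability mass on the same few storylines. That gap is what the training-time methods below aim to close.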
The solution: modifying post-training methods to prioritize diversity
To overcome these limitations, the researchers introduced DDPO and DORPO, two extensions of existing preference optimization methods. The core innovation in these approaches is the use of deviation—a measure of how much a response differs from others—to guide training.
Here’s how it works:
- During training, the model is given a writing prompt and multiple possible responses.
- Each response is compared to others for the same prompt, and a deviation score is calculated.
- Rare but high-quality responses are weighted more heavily in training, encouraging the model to learn from diverse examples.
By incorporating deviation into Direct Preference Optimization (DPO) and Odds Ratio Preference Optimization (ORPO), the model learns to produce high-quality but more varied responses.
This method ensures that AI-generated stories do not converge on a single predictable structure, but instead explore a wider range of characters, settings, and themes—just as a human writer might.
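The paper defines deviation precisely; purely as illustration, here is a minimal sketch of deviation-weighted DPO, assuming deviation is measured as a response's distance from the centroid of embeddings of other responses to the same prompt. The function names, the distance measure, and the beta value are our assumptions, not the paper's:

```python
import torch
import torch.nn.functional as F

def deviation_scores(embeddings: torch.Tensor) -> torch.Tensor:
    """Score how far each response sits from the others for one prompt.

    embeddings: (n_responses, dim) sentence embeddings of candidate
    responses to the same prompt. Distance from the centroid is one
    simple stand-in for the paper's deviation measure (an assumption).
    """
    centroid = embeddings.mean(dim=0, keepdim=True)
    dist = (embeddings - centroid).norm(dim=1)
    return dist / (dist.max() + 1e-8)  # normalize to [0, 1]

def ddpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                    ref_chosen_logps, ref_rejected_logps,
                    deviation, beta=0.1):
    """DPO loss reweighted by deviation (illustrative, not the paper's code).

    All log-prob tensors have shape (batch,); `deviation` is the chosen
    response's deviation score, so rare-but-preferred responses pull
    harder on the parameter update.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    per_example = -F.logsigmoid(chosen_rewards - rejected_rewards)  # standard DPO
    return (deviation * per_example).mean()  # deviation-based weighting
```

The key design choice is the multiplication at the end: a rare response that annotators preferred contributes a larger gradient than a common one, nudging the model toward the long tail of good answers.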
What Midjourney’s researchers did to achieve this
The study involved training LLMs on creative writing tasks using a dataset from the subreddit r/writingPrompts, a Reddit community where users post prompts and respond with short stories.
The researchers used two base models for their training:
- Meta's Llama-3.1-8B (an 8-billion-parameter model from the Llama 3.1 series).
- Mistral-7B-v0.3 (a 7-billion-parameter model from Mistral AI).
Then, they took these models through the following processes:
- Supervised Fine-Tuning (SFT): The models were first fine-tuned using LoRA (Low-Rank Adaptation) to adjust parameters efficiently (see the sketch after this list).
- Preference Optimization:
- DPO and ORPO were used as baselines—these standard methods focus on improving response quality based on user preference signals.
- DDPO and DORPO were then applied, introducing deviation-based weighting to encourage more unique responses.
- Evaluation:
- Automatic evaluation: Measured semantic and stylistic diversity using embedding-based techniques (one such metric is sketched after the findings below).
- Human evaluation: Judges assessed whether outputs were diverse and engaging compared to GPT-4o and Claude 3.5.
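As a rough sketch of the supervised fine-tuning setup, this is how LoRA adapters are commonly attached with Hugging Face's peft library; the rank, alpha, and target modules shown are typical defaults, not the paper's reported settings:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# LoRA trains small low-rank adapter matrices instead of all 8B weights.
config = LoraConfig(
    r=16,                     # adapter rank (illustrative)
    lora_alpha=32,            # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
# `model` can now be passed to a standard SFT training loop or trainer.
```

Because the base weights stay frozen and only the adapters train, fine-tuning models of this size becomes practical on modest hardware.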
Key Training Findings:
- DDPO significantly outperformed standard DPO in terms of output diversity while maintaining quality.
- Llama-3.1-8B with DDPO achieved the best balance of quality and diversity, producing responses that were more varied than GPT-4o while maintaining coherence.
- When dataset size was reduced, DDPO models still maintained diversity, though they required a certain number of diverse training samples to be fully effective.
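For the embedding-based diversity evaluation mentioned above, one common recipe, shown here as an illustration rather than the paper's exact metric, is mean pairwise cosine distance across a model's generations for the same prompt:

```python
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def semantic_diversity(stories: list[str]) -> float:
    """Mean pairwise cosine distance between story embeddings (illustrative)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # encoder choice is ours
    embeddings = encoder.encode(stories)
    sims = cosine_similarity(embeddings)
    pairs = list(combinations(range(len(stories)), 2))
    # Distance = 1 - similarity; average over all unordered pairs.
    return sum(1 - sims[i][j] for i, j in pairs) / len(pairs)
```

On a metric like this, the paper's headline result would show up as DDPO-trained models scoring higher than their DPO baselines while human judges still rate the individual stories as coherent.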
Enterprise implications: what does it mean for those using AI to produce creative responses — such as in marketing copywriting, corporate storytelling, and film/TV/video game scripting?
For AI teams managing LLM deployment, enhancing output diversity while maintaining quality is a critical challenge. These findings have significant implications for organizations that rely on AI-generated content in applications such as:
- Conversational AI and chatbots (ensuring varied and engaging responses).
- Content marketing and storytelling tools (preventing repetitive AI-generated copy).
- Game development and narrative design (creating diverse dialogue and branching storylines).
For professionals responsible for fine-tuning and deploying models in an enterprise setting, this research provides:
- A new approach to LLM post-training that enhances creativity without sacrificing quality.
- A practical alternative to inference-time diversity tuning (such as temperature adjustments) by integrating diversity into the learning process itself.
- The potential to develop more engaging AI applications, from AI-assisted writing tools to virtual assistants that can adapt their responses dynamically.
For those handling AI model orchestration and automation, this research highlights:
- The importance of tuning models at the training stage, reducing the need for post-processing adjustments at deployment.
- A way to introduce adaptive storytelling into AI-driven applications, ensuring variability while keeping content quality high.
- A method for making LLM outputs more human-like, which is crucial for applications requiring interactive storytelling, customer engagement, or dynamic content creation.
The future of AI-generated creative projects looks bright
The success of DDPO and DORPO demonstrates that training LLMs with diversity-focused objectives can yield significant improvements in creative writing. Some ideas include:
- Integrating deviation-based learning into enterprise AI models to enhance response diversity in customer-facing applications.
- Exploring how these methods apply to other generative tasks, such as AI-powered poetry, screenwriting, or game storytelling.
- Developing hybrid training approaches that balance diversity and instruction-following capabilities for AI assistants.
For those interested in applying these techniques, the researchers plan to make their code publicly available on GitHub.
Whether you are fine-tuning LLMs for business applications or optimizing large-scale AI orchestration, this study provides actionable insights into how models can be more dynamic, engaging, and responsive to creative tasks.
By adopting these techniques, AI teams can move beyond rigid, formulaic outputs—building AI systems that are not only smart but also truly imaginative.