
Meta introduces generative AI model 'CM3leon' for text, images


Meta (formerly Facebook) has introduced a generative artificial intelligence (AI) model, "CM3leon" (pronounced like chameleon), which handles both text-to-image and image-to-text generation.

“CM3leon is the first multimodal model trained with a recipe adapted from text-only language models, including a large-scale retrieval-augmented pre-training stage and a second multitask supervised fine-tuning (SFT) stage,” Meta said in a blogpost on Friday.
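The two-stage recipe described above can be sketched in miniature. This is an illustrative toy, not Meta's code: the `retrieve_context` helper, the toy "model", and all the data are hypothetical stand-ins showing only the ordering of the stages (retrieval-augmented pre-training, then multitask supervised fine-tuning).

```python
# Illustrative sketch of a two-stage training recipe (hypothetical names,
# toy data). Stage 1 augments each example with retrieved documents;
# stage 2 fine-tunes on instruction-style multitask data with no retrieval.

def retrieve_context(example, index):
    # Stand-in for retrieval-augmented pre-training: attach related
    # documents pulled from an index to the training example.
    docs = index.get(example, [])
    return example + " | " + " ".join(sorted(docs)) if docs else example

def train(model, batches, index=None):
    for example in batches:
        inputs = retrieve_context(example, index) if index else example
        model.append(inputs)  # stand-in for a gradient update
    return model

model = []
# Stage 1: large-scale retrieval-augmented pre-training
model = train(model, ["caption A", "caption B"], index={"caption A": ["doc1"]})
# Stage 2: multitask supervised fine-tuning (SFT) on task-formatted data
model = train(model, ["task: describe image -> answer"])
```

The point of the sketch is the separation of concerns: retrieval only augments the pre-training stage, while the SFT stage consumes task-formatted data directly.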

The company said that with CM3leon's capabilities, image generation tools can produce more coherent imagery that better follows input prompts.

According to Meta, CM3leon requires five times less computing power and a smaller training dataset than previous transformer-based methods.

On the most widely used image generation benchmark (zero-shot MS-COCO), CM3leon achieved an FID (Frechet Inception Distance) score of 4.88, establishing a new state of the art in text-to-image generation and outperforming Google's text-to-image model, Parti.
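For context on the 4.88 figure: FID measures how close the distribution of generated images is to the distribution of real images (lower is better). It fits a Gaussian to Inception-v3 activations of each set and compares the two. The standard formula can be sketched as below; this is a generic illustration, not code from Meta's evaluation.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(mu1, sigma1, mu2, sigma2):
    # FID between two Gaussians (mean vector mu, covariance sigma)
    # fitted to Inception activations of real and generated images:
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary noise
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Toy check: identical distributions give an FID of 0
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_inception_distance(mu, sigma, mu, sigma))  # 0.0
```

In practice the statistics are estimated from tens of thousands of images, so scores like 4.88 reflect small but meaningful distributional gaps.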

Moreover, the tech giant said that CM3leon excels at a wide range of vision-language tasks, such as visual question answering and long-form captioning.



CM3leon's zero-shot performance compares favourably to larger models trained on larger datasets, despite training on a dataset of only three billion text tokens.

"With the goal of creating high-quality generative models, we believe CM3leon's strong performance across a variety of tasks is a step toward higher-fidelity image generation and understanding," Meta said.

“Models like CM3leon could ultimately help boost creativity and better applications in the metaverse. We look forward to exploring the boundaries of multimodal language models and releasing more models in the future,” it added.
