Generative AI frameworks and models are among the hottest trends of 2022, as new approaches have come to market that let users and organizations generate images and text.
Among the organizations building generative AI technologies is Stability AI, which raised $101 million in funding in October. Stability AI develops open-source foundation models, including the popular Stable Diffusion model. Stable Diffusion enables anyone to generate creative images simply by inputting a text prompt describing the desired image. Building a generative AI model like Stable Diffusion requires a significant amount of computing power both for training and for inference.
At the AWS re:Invent conference this week in Las Vegas, Stability AI formally announced that it had chosen AWS as its preferred cloud platform for building generative AI tools. As it turns out, Stability AI isn’t a stranger to AWS and has already been using the cloud platform.
“Last week, we released Stable Diffusion 2.0, developed at Stability AI, which is another step forward to clean our dataset, with better quality, less bias and faster [speeds],” Emad Mostaque, founder and CEO of Stability AI, said during a session at the re:Invent 2022 conference. “We built this all on AWS.”
The cloud and generative AI go hand in hand
Stability AI isn’t the only generative AI vendor that relies on the public cloud to help build foundation models.
OpenAI, the organization behind the GPT-3 large language model for text and DALL-E for image generation, already relies on the public cloud. However, rather than using AWS, OpenAI has largely relied on Microsoft Azure to help build and deliver its capabilities.
OpenAI’s reliance on Microsoft Azure isn’t just about technology. There is also a financial incentive. In 2019, Microsoft invested $1 billion into OpenAI to help develop AI technologies on Azure.
Not to be outdone, Google is also using its cloud for generative AI efforts, including its own Imagen text-to-image initiative.
Stable Diffusion 2.0 uses the cloud to generate AI images faster
Building Stable Diffusion is an exercise that involves multiple steps and components. At the most basic level, it’s about data.
Mostaque said that Stable Diffusion started with 100,000 GB of images and labels, and was able to compress it down to just 2 GB of data for the AI model.
Stable Diffusion 2.0 offers more control of how images are generated at higher levels of detail. And with the 2.0 version, Stable Diffusion has gotten faster. With the initial release, Mostaque said it took approximately 5.6 seconds to generate an image. Today it takes only 0.9 seconds. He said the technology is set to get even faster as it heads toward real-time generation of high-resolution images.
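As a back-of-the-envelope check, the figures Mostaque cited imply two ratios: the dataset-to-model compression factor and the generation-time speedup. The snippet below derives both from the numbers in the article (the ratios themselves are my own arithmetic, not figures Stability AI published):

```python
# Inputs are the figures cited in the article; the derived ratios are
# back-of-the-envelope arithmetic, not official Stability AI numbers.

dataset_gb = 100_000   # images and labels Stable Diffusion started with
model_gb = 2           # size of the resulting model

compression_ratio = dataset_gb / model_gb  # 50,000x

v1_seconds = 5.6       # per-image generation time at the initial release
v2_seconds = 0.9       # per-image generation time today

speedup = v1_seconds / v2_seconds  # roughly 6.2x

print(f"Compression: {compression_ratio:,.0f}x, speedup: {speedup:.1f}x")
```

In other words, the model weighs in at roughly 1/50,000th of its training data, and image generation is about six times faster than at launch, which is consistent with Mostaque's claim that real-time high-resolution generation is within reach.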
Using AWS SageMaker to build generative AI
Stability AI is working with the AWS SageMaker suite of tools now to continue building and improving Stable Diffusion and other foundation models.
Mostaque said that GPT-NeoX, which comes from the EleutherAI community that Stability supports, is a popular foundation language model. Using SageMaker, Stability is training it across 1,000 Nvidia A100 GPUs to make the model run faster.
“Scaling our infrastructure is incredibly hard and making these models available is incredibly hard,” Mostaque said. “We think that with SageMaker and the broader Amazon suite we can bring this technology to everyone, to [not only] create … one model for someone but create models all around the world and make it accessible.”
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.