There are many reasons that businesses may want to choose open-source over proprietary tools when getting started with generative AI.

This could be because of cost, opportunities for customization and optimization, transparency or simply the support that’s offered by the community.

There are disadvantages too, of course, and I cover the pros and cons of each option more fully in this article.

With software generally, the term open-source simply means that the source code is publicly available and can be used, free of charge, for pretty much any purpose.

When it comes to AI models, though, there has been some debate about exactly what this entails, as we will see when we discuss the individual models covered here. So, let’s dive in.

Stable Diffusion 3 (the latest version as of writing) is one of the most powerful and flexible image generation models, and certainly the most widely used open-source image model. It supports text-to-image as well as image-to-image generation and has become well known for its ability to create highly realistic and detailed images.

As is common with open-source software, using Stable Diffusion isn’t quite as straightforward as using commercial, proprietary tools like ChatGPT. Rather than having its own web interface, it’s accessed through third-party tools built by commercial entities, including DreamStudio and Stable Diffusion Web. The alternative is to download the model weights and run it yourself locally, which requires providing your own compute resources as well as technical know-how.
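For a sense of what “running it yourself” involves, here is a minimal sketch using the Hugging Face diffusers library. The model ID and settings below are assumptions for illustration; check Stability AI’s Hugging Face model card for the current checkpoint name and license terms.

```python
def generate_image(prompt: str, out_path: str = "output.png") -> str:
    """Sketch of local Stable Diffusion 3 inference via Hugging Face
    diffusers. Requires `pip install torch diffusers transformers`, a
    CUDA GPU, and a multi-gigabyte weight download on first run. The
    model ID below is an assumption; check Stability AI's model card.
    """
    import torch
    from diffusers import StableDiffusion3Pipeline

    # Load the pipeline in half precision to reduce GPU memory use
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(prompt, num_inference_steps=28).images[0]
    image.save(out_path)
    return out_path
```

The heavy imports sit inside the function so the sketch can be read and loaded without the libraries installed; actually calling it is what triggers the download and GPU work.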

Meta’s Llama is a family of language models available in various sizes, making it suitable for different applications, from lightweight mobile clients to full-size cloud deployments. The same model that powers the Meta AI assistant across Meta’s social media platforms can be deployed by anyone for many uses, including natural language generation and creating computer code. One of its strong points is its ability to run on relatively low-powered hardware. However, as with some of the other models covered here, there is some debate as to whether it can truly be considered open source, as Meta has not disclosed exact details of its training data.
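The “low-powered hardware” point comes down to simple arithmetic: the memory needed just to hold a model’s weights is roughly the parameter count times the bytes per weight, which is why small, quantized variants can fit on a laptop. A back-of-envelope sketch:

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate gigabytes needed just to store a model's weights
    (ignores activations, KV cache and other runtime overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

# A hypothetical 7-billion-parameter model:
print(weight_memory_gb(7e9, 16))  # 14.0 GB at 16-bit precision
print(weight_memory_gb(7e9, 4))   # 3.5 GB with 4-bit quantization
```

This is why quantization (shrinking each weight from 16 bits to 4 or 8) is the usual route to running such models on consumer hardware.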

Mistral is a French startup that has developed several generative AI models and made them available under open-source licenses. These include Mistral 7B, which is designed to be lightweight and easy to deploy on low-power hardware, and the more powerful Mixtral 8x22B. Mistral has a strong user community offering support, and positions its models as highly flexible and customizable generative language models.

OpenAI has open-sourced the second version of its LLM – essentially an earlier version of the engines that now power ChatGPT. While GPT-2 isn’t as big, powerful or flexible as the later GPT-3.5 or GPT-4 (built on 1.5 billion parameters, compared to a reported one trillion-plus for GPT-4), it’s still considered perfectly adequate for many language-based tasks such as generating text or powering chatbots. GPT-2 is made available by OpenAI under the MIT license, which is generally considered to be compliant with open-source principles.
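As a sketch of how little code such a task takes, the small GPT-2 checkpoint can be run locally through the Hugging Face transformers library (an assumption here; OpenAI’s original release can also be used directly):

```python
def generate_text(prompt: str, max_new_tokens: int = 40) -> str:
    """Sketch of local text generation with GPT-2 via Hugging Face
    transformers. Requires `pip install transformers torch`; the small
    124M-parameter "gpt2" checkpoint (~500 MB) downloads on first use.
    """
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(prompt, max_new_tokens=max_new_tokens)
    return result[0]["generated_text"]
```

Because the weights are MIT-licensed, this kind of local deployment carries none of the per-token costs of a hosted API.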

BLOOM is described as the world’s largest open, multilingual language model, built on 176 billion parameters. Development was led by Hugging Face, the company behind a major repository of open-source AI resources, working alongside a team of over 1,000 researchers as part of a global collaborative project known as BigScience. The aim was to create a truly open and transparent LLM available to anyone who agrees to the terms of the project’s Responsible AI License. Technically, this means it isn’t quite open source, but it is freely available to use and distribute, as long as it isn’t used for harmful purposes as defined by the terms of the license. This makes it a very interesting experiment in the critically important domain of developing and distributing ethical AI.

Grok also claims to be the world’s largest open-source model, although again there is some debate as to whether it technically meets all of the criteria for being truly open source.

Grok was designed and built by xAI, a startup founded by Elon Musk following his split from OpenAI. The split has been reported as stemming from disagreements over exactly what “open” means when it comes to AI models.

Rather than using the term large language model, xAI describes Grok as a “mixture of experts” model – an architecture in which each input is routed to a subset of specialized subnetworks rather than through a single monolithic network. The open release is the general-purpose base model, which has not been specifically fine-tuned for creating dialogue, as is the case with, for example, ChatGPT.
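A toy sketch of the routing idea (purely illustrative numpy, not Grok’s actual architecture): a small gating network scores the experts, only the top-k actually run on the input, and their outputs are blended by the renormalized gate weights.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Toy mixture-of-experts routing (illustrative only). A gating
    network scores each expert; only the top-k experts run, and their
    outputs are blended by renormalized gate weights."""
    scores = x @ gate_w                     # one score per expert
    top = np.argsort(scores)[-k:]           # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                # renormalize over chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

The appeal is efficiency: a model can hold a very large total parameter count while only a fraction of those parameters are active for any given input.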

As with Llama, skepticism of Grok’s open-source status rests on the fact that while xAI has made the model’s weights and architecture publicly available, it hasn’t revealed all of the code or training data.

Two models of the Falcon LLM architecture have been made freely available by its developers, the Technology Innovation Institute, a research institution founded by the government of Abu Dhabi. Both models – the more portable Falcon 40B and the more powerful Falcon 180B – have been released as open source, and the larger reportedly comes second only to GPT-4 on Hugging Face’s leaderboard of LLM performance. While the smaller model is released under the Apache 2.0 license – generally considered to fit the definition of open source – the larger model has some conditions attached to its use and distribution.

This exploration into the realm of open-source generative AI tools illuminates the diverse array of options available and underscores the transformative potential these technologies hold for businesses eager to leverage AI’s power while embracing transparency, cost-efficiency, and robust community support.
