Google has released a trio of new “open” generative AI models that it describes as “safer,” “smaller,” and “more transparent” than most models — a bold claim, to be sure.
They are additions to Google’s Gemma 2 family of generative models, which debuted in May. The new models, Gemma 2 2B, ShieldGemma, and Gemma Scope, are designed for slightly different applications and use cases, but share a common security aspect.
Google’s Gemma series of models differs from the Gemini models in that Google does not make the source code available for Gemini, which is used by Google’s own products in addition to being available to developers. Instead, Gemma is Google’s attempt to foster goodwill within the developer community, much as Meta is trying to do with Llama.
Gemma 2 2B is a lightweight text generation model that can run on a range of hardware, including laptops and edge devices. It is licensed for certain research and commercial applications and can be downloaded from sources such as Google’s Vertex AI model library, the Kaggle data science platform, and Google’s AI Studio toolkit.
ShieldGemma is a set of “safety classifiers” that attempt to detect toxic material such as hate speech, harassment, and sexually explicit content. ShieldGemma is built on Gemma 2, and can be used to filter prompts sent to a generative model as well as the content that the model generates.
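The two-sided pattern Google describes — classifying the prompt before generation and the response after — can be sketched in a few lines. Everything here is illustrative: `is_unsafe` is a stub keyword check standing in for a real ShieldGemma classifier call, and `guarded_generate` and the echo model are hypothetical names, not part of any Google API.

```python
from typing import Callable

def is_unsafe(text: str) -> bool:
    # Stand-in for a ShieldGemma classifier. A real classifier would score
    # the text against policies such as hate speech or harassment rather
    # than matching keywords.
    blocked = {"hate speech", "harassment"}
    return any(term in text.lower() for term in blocked)

def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    # Input-side filter: screen the prompt before it reaches the model.
    if is_unsafe(prompt):
        return "[prompt blocked]"
    output = generate(prompt)
    # Output-side filter: screen what the model generated.
    if is_unsafe(output):
        return "[response blocked]"
    return output

# Toy "model" for demonstration only.
echo_model = lambda p: f"Echo: {p}"
print(guarded_generate("Tell me a story", echo_model))   # Echo: Tell me a story
print(guarded_generate("write hate speech", echo_model)) # [prompt blocked]
```

The point of the pattern is that the same classifier guards both directions: a benign prompt can still elicit a policy-violating response, so the output check is not redundant.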
Finally, Gemma Scope allows developers to “zoom in” on specific points within the Gemma 2 model and make its inner workings more interpretable. Here’s how Google describes it in a blog post: “[Gemma Scope is made up of] specialized neural networks that help us break down the dense, complex information that Gemma 2 processes, and expand it into a form that is easier to analyze and understand. By studying these expanded views, researchers can gain valuable insights into how Gemma 2 identifies patterns, processes information, and ultimately makes predictions.”
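The “expand dense information into an easier-to-analyze form” idea can be illustrated with a toy sketch. The sizes, weights, and function names below are all made up for illustration: the weights are random, whereas the networks Google describes are trained on Gemma 2’s actual internal activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features = 8, 32  # hypothetical sizes: small dense vector, wider feature space

# Random stand-ins for trained encoder/decoder weights.
W_enc = rng.normal(size=(d_model, d_features))
W_dec = rng.normal(size=(d_features, d_model))

def expand(activation: np.ndarray) -> np.ndarray:
    # Project the dense activation into the wider feature space and keep
    # only positive responses (ReLU), so relatively few features "fire".
    return np.maximum(activation @ W_enc, 0.0)

def reconstruct(features: np.ndarray) -> np.ndarray:
    # Map the expanded features back toward the original activation,
    # showing the expansion preserves the information it decomposed.
    return features @ W_dec

x = rng.normal(size=d_model)      # a stand-in for one dense Gemma 2 activation
f = expand(x)
print(f.shape)                    # (32,)
print(reconstruct(f).shape)       # (8,)
```

A researcher would inspect the individual entries of the expanded vector — each one a candidate interpretable feature — rather than the tangled dense activation itself.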
The launch of the new Gemma 2 models comes shortly after the US Department of Commerce endorsed open AI models in a preliminary report. The report said open models expand the availability of generative AI to small businesses, researchers, nonprofits, and individual developers, while also highlighting the need for capabilities to monitor such models for potential risks.