What is Gemma?
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. The family includes models at several sizes, each optimized for specific use cases.
Features of Gemma
- Gemma models incorporate comprehensive safety measures, using curated datasets and rigorous tuning to support responsible, trustworthy AI applications.
- Gemma models achieve strong benchmark results across sizes, outperforming some larger open models.
- With Keras 3, Gemma models run on JAX, TensorFlow, and PyTorch, so you can choose or switch frameworks to suit your task.
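Keras 3 selects its backend from an environment variable that is read once, at import time. A minimal sketch of switching frameworks (assuming JAX is installed; the preset name shown in the comments is one of the published KerasNLP Gemma presets):

```python
import os

# Keras 3 reads the backend choice at import time, so set it before the
# first `import keras` anywhere in the process.
# Valid values: "jax", "tensorflow", "torch".
os.environ["KERAS_BACKEND"] = "jax"

# With the backend selected, a Gemma checkpoint loads the same way on any
# framework. Downloading weights requires Kaggle credentials, so the load
# is shown as a comment here:
# import keras_nlp
# gemma = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_2b_en")
# gemma.generate("Explain JAX in one sentence.", max_length=64)
```

Switching to TensorFlow or PyTorch is just a matter of changing the environment variable before launching the program; the model-loading code stays the same.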
How to use Gemma
- Get started with Gemma 2, which is optimized for fast inference across a wide range of hardware.
- Try Gemma 2 in Google AI Studio and read the technical report.
- Explore the Gemma model family, including Gemma 1, RecurrentGemma, PaliGemma, and CodeGemma, each optimized for specific use cases.
Price
Gemma models are open and free to use. You can access Gemma models on Kaggle, Vertex AI Model Garden, and Hugging Face Models.
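Accessing the weights from Hugging Face typically looks like the sketch below. The checkpoint ID is one of the published Gemma repositories; downloading requires accepting the Gemma terms on the model page and authenticating, so those calls are left as comments:

```python
# Minimal access sketch; "google/gemma-2-2b" is one of the published
# Hugging Face checkpoint IDs.
model_id = "google/gemma-2-2b"

# Downloading requires accepting the Gemma terms on the model page and
# logging in (e.g. `huggingface-cli login`), so the calls are commented:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(model_id)
# model = AutoModelForCausalLM.from_pretrained(model_id)
```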
Helpful Tips
- Discover quickstarts on Kaggle and try low-rank adaptation with JAX via Keras 3.
- Train and deploy on Google Cloud, with end-to-end TPU optimization on Vertex AI for strong performance and lower total cost of ownership.
- Explore quick-start guides from partners including Hugging Face, NVIDIA, LangChain, Anyscale, and MongoDB.
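The low-rank adaptation (LoRA) technique mentioned in the Keras quickstart can be sketched in a few lines of NumPy. Instead of updating a full weight matrix W, LoRA learns two small matrices B and A whose product is added to W; the matrix names and sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 256, 4  # rank r is much smaller than the layer dims

W = rng.normal(size=(d_out, d_in))  # frozen pretrained weights
B = np.zeros((d_out, r))            # trainable, initialized to zero
A = rng.normal(size=(r, d_in))      # trainable

# The adapted layer computes y = (W + B @ A) @ x. Because B starts at
# zero, the update B @ A is zero and the model begins identical to the
# base model; training then adjusts only B and A.
x = rng.normal(size=(d_in,))
y = (W + B @ A) @ x

# Trainable parameters drop from d_out * d_in to r * (d_out + d_in).
full_params = d_out * d_in      # 65536
lora_params = r * (d_out + d_in)  # 2048
```

This is why LoRA fine-tuning fits on modest accelerators: for this toy layer, only about 3% of the original parameter count is trained.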
Frequently Asked Questions
- How do I access Gemma models?
- You can access Gemma models on Kaggle, Vertex AI Model Garden, and Hugging Face Models.
- How do I use Gemma models responsibly?
- Gemma models are built with safety in mind, combining comprehensive safety measures with transparent reporting to support responsible AI development.
- Can I use Gemma models for academic research?
- Yes. Gemma models are free to use for research, and you can apply for Google Cloud credits to accelerate your work.