News

For enterprise teams and commercial developers, the release means the model can be embedded in products or fine-tuned for their own use cases.
Google DeepMind Staff AI Developer Relations Engineer Omar Sanseviero said in a post on X that Gemma 3 270M is open-source ...
A tiny model trained on trillions of tokens, ready for specialized tasks. Google has unveiled a pint-sized new addition to its ...
Google has launched Gemma 3 270M, a compact 270-million-parameter AI model designed for efficient, task-specific fine-tuning ...
Google introduces Gemma 3 270M, a new compact AI model with 270 million parameters that companies can fine-tune for specific tasks. The model promises ...
Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion parameters. In ...
Investing.com -- Google has introduced Gemma 3 270M, a compact AI model designed specifically for task-specific fine-tuning with built-in instruction-following capabilities.
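The task-specific fine-tuning described across these reports can be approximated with the Hugging Face TRL library. The sketch below is illustrative only: the Hub model id, the example dataset, and the hyperparameters are assumptions for demonstration, not details drawn from the coverage.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Small illustrative dataset; substitute your own task-specific data.
dataset = load_dataset("trl-lib/Capybara", split="train[:500]")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",      # assumed Hub id; check the official model card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="gemma-3-270m-sft",   # hypothetical output path
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
)
trainer.train()
```

A small model like this can typically be fine-tuned on a single consumer GPU, which is the main practical appeal the reports point to.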
Gemma 3, built on the same research and technology as Google's larger Gemini 2.0 models, is designed to run on smaller devices such as phones and laptops. The model comes in four sizes: 1B, 4B, 12B and 27B parameters.
The latest Gemma model is aimed primarily at developers who need to build AI that runs in a variety of environments, from a data center to a smartphone. And you can tinker with Gemma 3 right now.
Google Gemma 3 is part of an industry trend in which companies developing Large Language Models (Gemini, in Google's case) are also releasing small language models (SLMs).
Gemma 3 1B can run on the CPU or, for best performance, the GPU of a mobile device with at least 4GB of memory. The model is available for download from Hugging Face under Google's usage license.
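For desktop experimentation, the same weights can be pulled from the Hugging Face Hub with the transformers library. This is a minimal sketch assuming the model id below is correct and that you have accepted Google's usage license on the Hub; it is not an official recipe from the coverage.

```python
from transformers import pipeline

# Gemma weights on the Hub are gated behind Google's usage license, so accept
# it on the model page and authenticate (e.g. `huggingface-cli login`) first.
generator = pipeline("text-generation", model="google/gemma-3-1b-it")  # assumed Hub id

# Runs on CPU by default; pass device=0 (or device_map="auto") to use a GPU.
result = generator("Small language models are useful on phones because", max_new_tokens=64)
print(result[0]["generated_text"])
```

On-device deployment on phones would instead go through a mobile runtime such as Google's AI Edge tooling rather than the transformers pipeline shown here.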