Google Gemini 1.5 Pro with 2 million tokens opens for developers

Google today opened the Gemini 1.5 Pro large language model (LLM) to all developers worldwide. The release gives developers access to the 2-million-token context window, adds code execution to the Gemini API, and brings Gemma 2 to Google AI Studio.
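For orientation, here is a minimal sketch of calling Gemini 1.5 Pro through the google-generativeai Python SDK; the environment variable name and prompts are illustrative assumptions, and long inputs simply count against the 2-million-token context window.

```python
# Minimal sketch: querying Gemini 1.5 Pro via the google-generativeai SDK.
# The GEMINI_API_KEY variable and the prompts are illustrative assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

# count_tokens helps verify a large prompt fits within the 2M-token window.
print(model.count_tokens("A very long document ..."))

response = model.generate_content("Summarize the key points of this document ...")
print(response.text)
```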

The company first announced these Gemini 1.5 Pro capabilities at I/O 2024 behind a waitlist. It also introduced context caching in the Gemini API for both 1.5 Pro and 1.5 Flash.
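Context caching lets developers upload a large, reused input (such as a long document or transcript) once and reference it across multiple requests. A rough sketch using the SDK's caching module follows; the model version string, TTL, and file handling are assumptions and should be checked against the current API reference.

```python
# Rough sketch of context caching with the google-generativeai SDK.
# The model version string, display name, TTL, and transcript file are
# illustrative assumptions; verify against the current API reference.
import datetime
import google.generativeai as genai
from google.generativeai import caching

# Hypothetical large input; in practice cached content must meet a
# minimum token count to be accepted.
transcript_file = genai.upload_file("meeting_transcript.txt")

cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="transcript-cache",
    system_instruction="Answer questions using only the cached transcript.",
    contents=[transcript_file],
    ttl=datetime.timedelta(minutes=30),
)

# Build a model bound to the cached content and query it repeatedly.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("List the action items from the meeting.")
print(response.text)
```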


Gemini 1.5 Pro (Image Source – Google)

Google has enabled code execution for Gemini 1.5 Pro and 1.5 Flash. Once enabled, the model can generate and run Python code, learning iteratively from the results until it produces a satisfactory output.


The execution sandbox is not connected to the internet and comes standard with a few numerical libraries; developers are billed based on the output tokens from the model. Code execution is available today via the Gemini API and in Google AI Studio under "advanced settings".
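As a rough illustration, the tool can be switched on when constructing the model in the Python SDK. The tools="code_execution" argument and the prompt below follow the launch documentation but should be treated as a sketch rather than a definitive example.

```python
# Sketch: enabling the code execution tool for Gemini 1.5 Pro.
# The tools="code_execution" argument follows Google's launch docs;
# treat it as an assumption and verify against the current SDK reference.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    tools="code_execution",
)

# The model writes and runs Python in Google's sandbox, iterating on the
# result before returning an answer; only output tokens are billed.
response = model.generate_content(
    "What is the sum of the first 50 prime numbers? "
    "Generate and run code for the calculation."
)
print(response.text)
```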

Google also announced that it is bringing tuning for Gemini 1.5 Flash to developers to enable new use cases. Text tuning in 1.5 Flash is now ready for red-teaming and will roll out gradually to developers starting today, with access via the Gemini API and in Google AI Studio expected by mid-July.


(source)
