Catenaa, Friday, December 13, 2024 – Google this week launched Gemini 2.0, an advanced AI model featuring multimodal capabilities and a focus on transforming AI chatbots into AI agents.
The update includes improved reasoning, native image and text-to-speech outputs, and advanced context handling for research tasks.
Google said the release advances its vision of building a universal AI assistant.
Users can access Gemini 2.0 through Google AI Studio, where it supports a context window of up to 1 million tokens and customizable research methodologies.
Gemini Advanced introduces a “Deep Research” feature, offering extensive topic exploration and report generation.
Google also showcased Project Astra, an AI assistant powered by Gemini 2.0, enabling real-time interactions via smartphones. Features include multilingual conversations, Google Search integration, and enhanced memory capabilities. Astra marks Google’s response to Meta’s AI developments.
Meanwhile, Anthropic unveiled Claude 3.5 Haiku, an update boasting superior coding and data processing abilities at competitive pricing.
Designed for enterprise use, the model offers significant cost savings and excels in multilingual and technical tasks.
Google’s Gemini 2.0 launch coincides with OpenAI’s “12 Days of OpenAI” campaign, underscoring the fierce competition in the AI space. OpenAI recently introduced new reasoning models and a $200-per-month ChatGPT Pro subscription.
Google plans to expand Gemini 2.0 integration across its ecosystem in early 2025, including wider availability of AI features in Search, targeting more than 1 billion users.