DEC 12, 2024 / Cloud
Google Cloud Next 2025, happening April 9-11 in Las Vegas, will feature expanded developer content, interactive experiences, and opportunities to connect with peers and Google experts.
DEC 12, 2024 / Android
The Android XR SDK, a new platform for building extended reality (XR) experiences on Android, is now available for developers to try and provide feedback on.
DEC 11, 2024 / Gemini
Gemini 2.0 Flash, now available for testing in Google AI Studio, offers enhanced capabilities such as multimodal output and native tool use, and introduces new coding agents to improve developer productivity.
DEC 05, 2024 / Gemma
PaliGemma 2, the next evolution in tunable vision-language models, brings scalable performance, long captioning, and other expanded capabilities. Get started with pre-trained models, documentation, and tutorials.
NOV 25, 2024 / Cloud
The Google Developer Program premium membership offers benefits such as Google Cloud credits, certification vouchers, and access to Cloud Skills Boost.
NOV 25, 2024 / Gemini
Explore real-world applications of Gemini's multimodal AI capabilities, from detailed image description and information extraction to object detection, video summarization, and more.
NOV 21, 2024 / Mobile
The winners of the Gemini API Developer Competition showcased the potential of the Gemini API in creating impactful solutions, from AI-powered personal assistants to tools for accessibility and creativity.
NOV 19, 2024 / Firebase
Explore Firebase's new AI-powered app development tools and resources, including demos, documentation, and best practices at Firebase Demo Day 2024.
NOV 14, 2024 / Gemini
The integration of Gemini 1.5 models with Sublayer's Ruby-based AI agent framework enables developer teams to automate their documentation process, streamline workflows, and build AI-driven applications.
NOV 13, 2024 / Gemma
vLLM's continuous batching and Dataflow's model manager combine to optimize LLM serving and simplify deployment, giving developers a powerful foundation for building high-performance LLM inference pipelines more efficiently.