Google (Gemini)
Use Gemini 2.0, Gemini 1.5 Pro, and Flash models
Google Generative AI
Gemini 2.0 Flash, Gemini 1.5 Pro
Google's Gemini models excel at multimodal tasks and offer massive context windows.
Setup
1. Install Packages
npm install @yourgpt/copilot-sdk @yourgpt/llm-sdk openai

Google Gemini uses an OpenAI-compatible API, so we use the openai SDK.
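Under the hood, requests go to Google's OpenAI-compatibility endpoint. The sketch below shows roughly what that wire format looks like; the base URL is Google's documented OpenAI-compatible endpoint, but the helper itself is our own illustration, not part of either SDK:

```typescript
// Sketch only: the kind of OpenAI-compatible request the openai SDK builds for Gemini.
// buildChatRequest is our illustration, not an SDK export.
const GEMINI_OPENAI_BASE = 'https://generativelanguage.googleapis.com/v1beta/openai';

function buildChatRequest(apiKey: string, model: string, prompt: string) {
  return {
    url: `${GEMINI_OPENAI_BASE}/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// fetch(req.url, req.init) would perform the actual call.
```

You normally never build this request yourself; the SDK does it for you.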
2. Get API Key
Get your API key from Google AI Studio
3. Add Environment Variable
GOOGLE_API_KEY=...

4. Usage
import { generateText } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';

const result = await generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Explain machine learning.',
});

console.log(result.text);

5. Streaming (API Route)
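The route handler below returns a plain-text stream. On the client, a small reader can render chunks as they arrive; `readTextStream` is our own helper sketch, not an SDK export:

```typescript
// Sketch: read a text/plain streaming response incrementally on the client.
// readTextStream is our own helper, not part of @yourgpt/llm-sdk.
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk?: (text: string) => void,
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk?.(text); // e.g. append to the chat UI as tokens arrive
  }
  return full;
}

// Usage against the route below (appendToUI is a hypothetical UI callback):
// const res = await fetch('/api/chat', { method: 'POST', body: JSON.stringify({ messages }) });
// const text = await readTextStream(res.body!, (t) => appendToUI(t));
```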
import { streamText } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: google('gemini-2.0-flash'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toTextStreamResponse();
}

Available Models
// Gemini 2.5 (Experimental - Latest)
google('gemini-2.5-pro-preview-05-06') // Most capable, 1M context
google('gemini-2.5-flash-preview-05-20') // Fast and powerful
// Gemini 2.0
google('gemini-2.0-flash') // Fast and capable
google('gemini-2.0-flash-lite') // Most efficient
google('gemini-2.0-flash-exp') // Experimental
// Gemini 1.5
google('gemini-1.5-pro') // Best quality, 2M context
google('gemini-1.5-flash') // Fast and cheap
google('gemini-1.5-flash-8b') // Lightweight

Configuration Options
import { generateText } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';

// Custom API key
const model = google('gemini-2.0-flash', {
  apiKey: 'custom-api-key',
});

// With generation options
const result = await generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Hello',
  temperature: 0.7, // 0-1
  maxTokens: 8192, // Max response length
});

Tool Calling
import { generateText, tool } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';
import { z } from 'zod';

const result = await generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Search our knowledge base for AI topics',
  tools: {
    searchKnowledge: tool({
      description: 'Search internal knowledge base',
      parameters: z.object({
        query: z.string(),
        category: z.string().optional(),
      }),
      execute: async ({ query, category }) => {
        // searchKnowledge is your own lookup function, not part of the SDK
        return await searchKnowledge(query, category);
      },
    }),
  },
  maxSteps: 5,
});

Massive Context Window
Gemini supports 1M+ token context:
import { generateText } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';

const result = await generateText({
  model: google('gemini-1.5-pro'),
  system: `Here is the entire codebase:
${entireCodebase} // Can be 500K+ tokens!

Help users understand and modify this code.`,
  prompt: userQuestion,
});

Multimodal (Images, Video, Audio)
Gemini excels at multimodal understanding:
import { generateText } from '@yourgpt/llm-sdk';
import { google } from '@yourgpt/llm-sdk/google';

const result = await generateText({
  model: google('gemini-2.0-flash'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: "What's in this image?" },
        { type: 'image', image: imageBase64 },
      ],
    },
  ],
});

With Copilot UI
Use with the Copilot React components:
'use client';

import { CopilotProvider } from '@yourgpt/copilot-sdk/react';

export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <CopilotProvider runtimeUrl="/api/chat">
      {children}
    </CopilotProvider>
  );
}

Pricing
| Model | Input | Output |
|---|---|---|
| gemini-2.0-flash | Free tier available | See pricing |
| gemini-1.5-pro | $1.25/1M tokens | $5/1M tokens |
| gemini-1.5-flash | $0.075/1M tokens | $0.30/1M tokens |
Pricing is very competitive; check Google AI pricing for current rates.
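To budget requests against these rates, a small estimator can help. This is a sketch only: the rates are hard-coded from the two paid rows in the table above and will go stale, so always confirm against Google's pricing page:

```typescript
// Sketch: estimate request cost in USD from per-1M-token rates.
// Rates copied from the pricing table above; verify before relying on them.
const RATES: Record<string, { input: number; output: number }> = {
  'gemini-1.5-pro': { input: 1.25, output: 5.0 },
  'gemini-1.5-flash': { input: 0.075, output: 0.3 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`No rate on file for ${model}`);
  return (inputTokens / 1_000_000) * rate.input + (outputTokens / 1_000_000) * rate.output;
}

// Example: 1M tokens in and 1M tokens out on gemini-1.5-pro costs $6.25.
```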
Next Steps
- xAI - Ultra-fast inference
- generateText() - Full API reference
- tool() - Define tools with Zod