
Google Gemini

Lack of transparency

Google does not disclose any information about the model architectures or the inference infrastructure behind Gemini. As a result, the environmental impacts can only be estimated with very low precision.

This guide focuses on the integration of 🌱 EcoLogits with the official Google Gemini Python client.

Official links:

* Repository: https://github.com/googleapis/python-genai
* Documentation: https://ai.google.dev/gemini-api/docs

Installation

To install EcoLogits along with all necessary dependencies for compatibility with the Google Gemini client, please use the google-genai extra-dependency option as follows:

pip install ecologits[google-genai]

This installation command ensures that EcoLogits is set up with the specific libraries required to interface seamlessly with the Google Gemini Python client.
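If your shell treats square brackets as glob patterns (zsh, for example), quote the package specifier to avoid a "no matches found" error:

pip install "ecologits[google-genai]"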

Chat Completions

Example

Integrating EcoLogits with your applications does not alter the standard outputs from the API responses. Instead, it enriches them by adding the Impacts object, which contains detailed environmental impact data.

from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

# Make generate content request
client = genai.Client()
response = client.models.generate_content( # (1)!
    model="gemini-2.0-flash-001",
    contents="Tell me a joke!"
)

# Get estimated environmental impacts of the inference
print(response.impacts)
  1. Also compatible with the "chat API"; see the example below:
    chat = client.chats.create(model='gemini-2.0-flash-001')
    response = chat.send_message('Tell me a joke!')
    print(response.impacts)
    
import asyncio
from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

# Make generate content request
async def main() -> None:
    client = genai.Client()
    response = await client.aio.models.generate_content( # (1)!
        model="gemini-2.0-flash-001",
        contents="Tell me a joke!"
    )

    # Get estimated environmental impacts of the inference
    print(response.impacts)

asyncio.run(main())
  1. Also compatible with the "chat API"; see the example below:
    chat = client.aio.chats.create(model='gemini-2.0-flash-001')
    response = await chat.send_message('Tell me a joke!')
    print(response.impacts)
    

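Beyond printing the whole object, the returned Impacts can be inspected field by field. The sketch below assumes the usual EcoLogits criteria attributes (energy, gwp), each carrying a value and a unit; values may be reported as ranges rather than single points, given the low precision mentioned above.

from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Tell me a joke!"
)

# Energy consumption and global warming potential of the request
# (assumed attributes: impacts.energy and impacts.gwp, each with value and unit)
print(f"Energy: {response.impacts.energy.value} {response.impacts.energy.unit}")
print(f"GWP:    {response.impacts.gwp.value} {response.impacts.gwp.unit}")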
Streaming example

In streaming mode, the impacts for the entire request are computed and attached to the last chunk.

from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

# Make generate content request
client = genai.Client()
stream = client.models.generate_content_stream( # (1)!
    model="gemini-2.0-flash-001",
    contents="Tell me a joke!"
)

# Get cumulative estimated environmental impacts of the inference
for chunk in stream:
    if chunk.impacts is not None:
        print(chunk.impacts)
  1. Also compatible with the "chat API"; see the example below:
    chat = client.chats.create(model='gemini-2.0-flash-001')
    stream = chat.send_message_stream("Tell me a long joke!")
    for chunk in stream:
        if chunk.impacts is not None:
            print(chunk.impacts)
    
import asyncio
from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

# Ask something to Google Gemini in streaming and async mode
async def main() -> None:
    # Make generate content request
    client = genai.Client()
    stream = await client.aio.models.generate_content_stream( # (1)!
        model="gemini-2.0-flash-001",
        contents="Tell me a joke!"
    )

    # Get cumulative estimated environmental impacts of the inference
    async for chunk in stream:
        if chunk.impacts is not None:
            print(chunk.impacts)

asyncio.run(main())
  1. Also compatible with the "chat API"; see the example below:
    chat = client.aio.chats.create(model='gemini-2.0-flash-001')
    stream = await chat.send_message_stream("Tell me a long joke!")
    async for chunk in stream:
        if chunk.impacts is not None:
            print(chunk.impacts)
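Because the impacts only appear on the final chunk, a small helper can accumulate the streamed text and hand back the impacts once the stream is exhausted. This is a minimal sketch, not part of the EcoLogits API: generate_with_impacts is a hypothetical name, and chunk.text is the text accessor from the Gemini SDK.

from ecologits import EcoLogits
from google import genai

# Initialize EcoLogits
EcoLogits.init(providers=["google_genai"])

def generate_with_impacts(prompt: str):
    client = genai.Client()
    stream = client.models.generate_content_stream(
        model="gemini-2.0-flash-001",
        contents=prompt
    )
    text, impacts = "", None
    for chunk in stream:
        if chunk.text:                 # accumulate the streamed text
            text += chunk.text
        if chunk.impacts is not None:  # only the last chunk carries impacts
            impacts = chunk.impacts
    return text, impacts

text, impacts = generate_with_impacts("Tell me a joke!")
print(text)
print(impacts)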