utils

compute_llm_impacts(provider, model_name, output_token_count, request_latency)

High-level function to compute the impacts of an LLM generation request.

Parameters:

Name                 Type    Description                   Default
provider             str     Name of the provider.         required
model_name           str     Name of the LLM used.         required
output_token_count   int     Number of generated tokens.   required
request_latency      float   Measured request latency.    required

Returns:

Type                 Description
Optional[Impacts]    The impacts of an LLM generation request, or None if the model cannot be found.
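
For example, a minimal call might look like the following sketch. The provider and model names here are hypothetical illustrations; any pair known to the models registry works, and the function returns None for unknown models.

from ecologits.tracers.utils import compute_llm_impacts

# Hypothetical provider/model pair and request measurements, for illustration only.
impacts = compute_llm_impacts(
    provider="openai",
    model_name="gpt-3.5-turbo",
    output_token_count=150,
    request_latency=2.3,
)
if impacts is None:
    # Unknown model: no estimate is available.
    print("No impact estimate for this model.")
else:
    print(impacts)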

Source code in ecologits/tracers/utils.py
def compute_llm_impacts(
    provider: str,
    model_name: str,
    output_token_count: int,
    request_latency: float,
) -> Optional[Impacts]:
    """
    High-level function to compute the impacts of an LLM generation request.

    Args:
        provider: Name of the provider.
        model_name: Name of the LLM used.
        output_token_count: Number of generated tokens.
        request_latency: Measured request latency.

    Returns:
        The impacts of an LLM generation request.
    """
    model = models.find_model(provider=provider, model_name=model_name)
    if model is None:
        # TODO: Replace with proper logging
        print(f"Could not find model `{model_name}` for {provider} provider.")
        return None
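    # A model may define exact parameter counts or only a (min, max) range;
    # fall back to the average of the range when the exact count is missing.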
    model_active_params = model.active_parameters or _avg(model.active_parameters_range)    # TODO: handle ranges
    model_total_params = model.total_parameters or _avg(model.total_parameters_range)       # TODO: handle ranges
    return _compute_llm_impacts(
        model_active_parameter_count=model_active_params,
        model_total_parameter_count=model_total_params,
        output_token_count=output_token_count,
        request_latency=request_latency
    )
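
The range fallback relies on the private helper _avg from the same module. Its definition is not shown here; a plausible sketch, assuming it simply averages the endpoints of a (min, max) range:

def _avg(value_range):
    # Assumed behavior: arithmetic mean of the range endpoints.
    return sum(value_range) / len(value_range)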