
[Bug]: Gemini Integration release 0.4.7 breaks usage with LATS Agent Worker #17743

Closed
gurveervirk opened this issue Feb 7, 2025 · 5 comments · Fixed by #17747
Labels
bug (Something isn't working), triage (Issue needs to be triaged/prioritized)

Comments

@gurveervirk

Bug Description

Using .achat and .astructured_predict with this release's Gemini LLM integration throws the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-25-7e90319d8af1> in <cell line: 1>()
----> 1 candidates = await Settings.llm.astructured_predict(
      2     Candidates,
      3     prompt=agent_worker.candiate_expansion_prompt,
      4     query=input,
      5     conversation_history="Find a functionality that allows retrieving data from a URL",

5 frames
/usr/local/lib/python3.11/dist-packages/llama_index/core/instrumentation/dispatcher.py in async_wrapper(func, instance, args, kwargs)
    365             )
    366             try:
--> 367                 result = await func(*args, **kwargs)
    368             except BaseException as e:
    369                 self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

/usr/local/lib/python3.11/dist-packages/llama_index/core/llms/llm.py in astructured_predict(self, output_cls, prompt, llm_kwargs, **prompt_args)
    429         )
    430 
--> 431         result = await program.acall(llm_kwargs=llm_kwargs, **prompt_args)
    432         dispatcher.event(LLMStructuredPredictEndEvent(output=result))
    433         return result

/usr/local/lib/python3.11/dist-packages/llama_index/core/program/function_program.py in acall(self, llm_kwargs, *args, **kwargs)
    176         tool = get_function_tool(self._output_cls)
    177 
--> 178         agent_response = await self._llm.apredict_and_call(
    179             [tool],
    180             chat_history=self._prompt.format_messages(llm=self._llm, **kwargs),

/usr/local/lib/python3.11/dist-packages/llama_index/core/instrumentation/dispatcher.py in async_wrapper(func, instance, args, kwargs)
    365             )
    366             try:
--> 367                 result = await func(*args, **kwargs)
    368             except BaseException as e:
    369                 self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

/usr/local/lib/python3.11/dist-packages/llama_index/core/llms/function_calling.py in apredict_and_call(self, tools, user_msg, chat_history, verbose, allow_parallel_tool_calls, error_on_no_tool_call, error_on_tool_error, **kwargs)
    255         )
    256 
--> 257         tool_calls = self.get_tool_calls_from_response(
    258             response, error_on_no_tool_call=error_on_no_tool_call
    259         )

/usr/local/lib/python3.11/dist-packages/llama_index/llms/gemini/base.py in get_tool_calls_from_response(self, response, error_on_no_tool_call, **kwargs)
    374         if len(tool_calls) < 1:
    375             if error_on_no_tool_call:
--> 376                 raise ValueError(
    377                     f"Expected at least one tool call, but got {len(tool_calls)} tool calls."
    378                 )

ValueError: Expected at least one tool call, but got 0 tool calls.

Is this still supported, or do we now have to use another technique?

Version

0.12.16

Steps to Reproduce

Call .achat (as the LATS agent worker does internally) or .astructured_predict with this release's Gemini LLM integration. The failure occurs in the base implementation of the LATS agent, and has also been reproduced with a direct call to .astructured_predict; a minimal sketch follows.
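
A minimal, self-contained repro sketch (the Candidates model, prompt text, and model name here are illustrative stand-ins for the LATS internals, not the exact code from the agent worker):

import asyncio

from pydantic import BaseModel
from llama_index.core.prompts import PromptTemplate
from llama_index.llms.gemini import Gemini

class Candidates(BaseModel):
    """Candidate next steps proposed for the agent."""
    candidates: list[str]

async def main():
    llm = Gemini(model="models/gemini-1.5-flash")
    prompt = PromptTemplate("Given the query: {query}, propose possible next steps.")
    # On llama-index-llms-gemini 0.4.7 this raised:
    # ValueError: Expected at least one tool call, but got 0 tool calls.
    candidates = await llm.astructured_predict(
        Candidates,
        prompt=prompt,
        query="Find a functionality that allows retrieving data from a URL",
    )
    print(candidates)

asyncio.run(main())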

Relevant Logs/Tracebacks

@gurveervirk added the bug and triage labels on Feb 7, 2025

dosubot bot commented Feb 7, 2025

The error you're encountering with the Gemini Integration release 0.4.7 seems to be related to the handling of tool calls. Specifically, the get_tool_calls_from_response function is raising a ValueError because it expects at least one tool call, but none are being returned. This behavior is controlled by the error_on_no_tool_call parameter, which is set to True in your case, causing the exception to be raised when no tool calls are found [1][2].

In version 0.4.7, the Gemini LLM integration added support for basic function calls, which might have introduced changes in how tool calls are processed [3]. If your current implementation relies on tool calls being present, you might need to adjust your code to handle cases where no tool calls are returned, or modify the error_on_no_tool_call parameter to False to avoid the exception.
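
A hedged sketch of that workaround, assuming you call the function-calling API directly rather than through astructured_predict (the tool and model name below are illustrative, not part of LATS):

import asyncio

from llama_index.core.tools import FunctionTool
from llama_index.llms.gemini import Gemini

def record_candidates(candidates: list[str]) -> str:
    """Illustrative tool: record candidate steps proposed by the model."""
    return ", ".join(candidates)

async def main():
    llm = Gemini(model="models/gemini-1.5-flash")
    tool = FunctionTool.from_defaults(fn=record_candidates)
    # With error_on_no_tool_call=False, a missing tool call no longer raises;
    # the plain chat response is returned instead and must be handled.
    response = await llm.apredict_and_call(
        [tool],
        user_msg="Propose next steps for retrieving data from a URL.",
        error_on_no_tool_call=False,
    )
    print(response)

asyncio.run(main())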

If this change in behavior is not suitable for your use case, you may need to explore alternative techniques or configurations that align with the new integration's capabilities.


@logan-markewich
Collaborator

Actually, this isn't a bug. Gemini was updated to use their official tool calling API, and in this case, it did not call the tool to get a structured output.

You'll need to adjust your tool name/description and prompt.

I don't think Gemini has a way to force the tool call, but lemme check.

@logan-markewich
Collaborator

Hmm, looks like they do have a forced tool call API, lemme integrate this.
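
(For context, the underlying google-generativeai SDK exposes a function_calling_config mode that forces a tool call. A rough sketch of that API, with an illustrative tool, assuming the SDK's automatic function declaration from a Python callable:)

import google.generativeai as genai

def propose_candidates(candidates: list[str]) -> str:
    """Illustrative tool: record candidate next steps."""
    return ", ".join(candidates)

model = genai.GenerativeModel("gemini-1.5-flash", tools=[propose_candidates])
response = model.generate_content(
    "Find a functionality that allows retrieving data from a URL",
    # mode "ANY" forces the model to call one of the declared tools
    tool_config={"function_calling_config": {"mode": "ANY"}},
)
print(response.candidates[0].content.parts)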

@logan-markewich
Collaborator

pip install -U llama-index-llms-gemini

@gurveervirk
Author

I'll check it out and let you know. Thanks!
