fix(llamaindex): handle None content in StructuredLLM responses (#3513) #3665
Kash6 wants to merge 2 commits into traceloop:main
Conversation
Important

Looks good to me! 👍

Reviewed everything up to 69d0e79 in 10 seconds: 167 lines of code in 2 files; skipped 0 files when reviewing; skipped posting 0 draft comments.

Workflow ID: wflow_4LB7l8lq0vXY3TI6
Hi @nirga @galkleinman @dinmukhamedm! Just wanted to gently ping this PR. I've updated the branch to be in sync with main. The original issue reporter cay89 confirmed the fix works for them in the comments. Happy to make any changes if needed. Thanks for maintaining this project!
Fixes #3513
When using StructuredLLM with .complete() or .acomplete(), response.message.content can be None because the structured output is placed in response.raw instead. This caused OpenTelemetry warnings when the instrumentation tried to set span attributes to None.
Added None checks before setting completion content attributes in span_utils.py.
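The guard described above can be sketched as follows. This is a minimal, hedged illustration: `SpanStub`, the setter name, and the attribute key are illustrative stand-ins, not the exact names used in `span_utils.py`, which works against the real OpenTelemetry Span API.

```python
# Minimal sketch of the None-check pattern from the fix. SpanStub and the
# attribute key are illustrative stand-ins for the OpenTelemetry span and
# the actual attribute names used by the LlamaIndex instrumentation.

class SpanStub:
    """Records attributes the way an OpenTelemetry span would."""

    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        # A real OTel span logs a warning when value is None; the fix
        # avoids ever reaching this call with None content.
        self.attributes[key] = value


def set_completion_content(span, content):
    """Set the completion content attribute only when content is not None.

    StructuredLLM responses can carry None in message.content (the
    structured output lives in response.raw), so the attribute is skipped
    in that case instead of triggering an OpenTelemetry warning.
    """
    if content is not None:
        span.set_attribute("gen_ai.completion.0.content", content)


span = SpanStub()
set_completion_content(span, None)      # skipped: nothing recorded
set_completion_content(span, "answer")  # recorded normally
print(span.attributes)                  # {'gen_ai.completion.0.content': 'answer'}
```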
Important

- Added `None` checks in `span_utils.py` to handle `None` content in `StructuredLLM` responses, preventing OpenTelemetry warnings.
- The `None` checks in `set_llm_chat_response` and `set_llm_predict_response` in `span_utils.py` prevent setting attributes with `None` values.
- Added `test_none_content_fix.py` to verify `None` handling in `set_llm_chat_response` and `set_llm_predict_response`: attributes are skipped when content is `None` and set correctly when it is not `None`.

This description was created for 69d0e79 and will update automatically as commits are pushed.
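A test like the `test_none_content_fix.py` described above might look like the following sketch. The setter and attribute names are hypothetical stand-ins for the real functions in `span_utils.py`; only the assertion pattern (skip on `None`, set otherwise) reflects what the PR verifies.

```python
# Hedged sketch of the kind of checks test_none_content_fix.py performs;
# set_completion_content stands in for the guarded setters in span_utils.py.
from unittest.mock import MagicMock


def set_completion_content(span, content):
    # The guarded setter under test (illustrative stand-in for the fix).
    if content is not None:
        span.set_attribute("gen_ai.completion.0.content", content)


def test_skips_none_content():
    span = MagicMock()
    set_completion_content(span, None)
    # The attribute must never be set when content is None.
    span.set_attribute.assert_not_called()


def test_sets_real_content():
    span = MagicMock()
    set_completion_content(span, "hello")
    span.set_attribute.assert_called_once_with(
        "gen_ai.completion.0.content", "hello"
    )


test_skips_none_content()
test_sets_real_content()
print("ok")
```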