
fix(llamaindex): handle None content in StructuredLLM responses (#3513)#3665

Open
Kash6 wants to merge 2 commits into traceloop:main from Kash6:fix/llamaindex-structured-llm-none-content

Conversation


@Kash6 Kash6 commented Feb 6, 2026

Fixes #3513

When using StructuredLLM with .complete() or .acomplete(), response.message.content can be None because the structured output is returned in response.raw instead. This caused OpenTelemetry warnings:

Invalid type NoneType for attribute 'gen_ai.completion.0.content'

Added None checks before setting completion content attributes in span_utils.py.
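The guard described above can be sketched as follows. This is a simplified illustration, not the actual span_utils.py code: FakeSpan and set_completion_content are hypothetical stand-ins for the real OpenTelemetry span and the traceloop helper, and the attribute key mirrors the one from the warning.

```python
class FakeSpan:
    """Minimal stand-in for an OpenTelemetry span (illustration only)."""

    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value


def set_completion_content(span, index, content):
    # Only set the attribute when content is not None. A StructuredLLM
    # response can carry its payload in response.raw and leave
    # message.content as None, and OpenTelemetry rejects None attribute
    # values with "Invalid type NoneType" warnings.
    if content is not None:
        span.set_attribute(f"gen_ai.completion.{index}.content", content)


span = FakeSpan()
set_completion_content(span, 0, None)  # skipped: no attribute, no warning
set_completion_content(span, 0, "structured output")  # recorded normally
print(span.attributes)  # {'gen_ai.completion.0.content': 'structured output'}
```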

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Important

Add None checks in span_utils.py to handle None content in StructuredLLM responses, preventing OpenTelemetry warnings.

  • Behavior:
    • Add None checks in set_llm_chat_response and set_llm_predict_response in span_utils.py to prevent setting attributes with None values.
  • Tests:
    • Add test_none_content_fix.py to verify None handling in set_llm_chat_response and set_llm_predict_response.
    • Tests ensure attributes are not set when content or output is None and are set correctly when not None.
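A test along the lines described above might look like this. The function and test names here are assumptions for illustration, not the contents of the actual test_none_content_fix.py:

```python
from unittest.mock import MagicMock


def set_completion_content(span, index, content):
    # Hypothetical guard under test, mirroring the fix's behavior.
    if content is not None:
        span.set_attribute(f"gen_ai.completion.{index}.content", content)


def test_none_content_is_not_set():
    span = MagicMock()
    set_completion_content(span, 0, None)
    # The attribute must not be set at all when content is None.
    span.set_attribute.assert_not_called()


def test_real_content_is_set():
    span = MagicMock()
    set_completion_content(span, 0, "structured output")
    span.set_attribute.assert_called_once_with(
        "gen_ai.completion.0.content", "structured output"
    )


test_none_content_is_not_set()
test_real_content_is_set()
print("ok")
```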

This description was created by Ellipsis for 69d0e79. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • Bug Fixes

    • Improved telemetry data collection by preventing null content values from being recorded in LLM response spans when content is unavailable, ensuring more accurate and cleaner trace data.
  • Tests

    • Added comprehensive test coverage for null content handling in LLM response scenarios.


CLAassistant commented Feb 6, 2026

CLA assistant check
All committers have signed the CLA.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 69d0e79 in 10 seconds.
  • Reviewed 167 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 0 draft comments. View those below.

Workflow ID: wflow_4LB7l8lq0vXY3TI6



coderabbitai bot commented Feb 6, 2026

📝 Walkthrough

This PR fixes a bug in the LlamaIndex instrumentation where None values for LLM response content were being set as span attributes, causing OpenTelemetry validation warnings when using StructuredLLM. The fix adds null checks before setting content attributes in two response handler functions and includes comprehensive test coverage.

Changes

Changes (Cohort / File(s) — Summary)

  • Null Check Implementation
    packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py
    Modified set_llm_chat_response and set_llm_predict_response to conditionally set content attributes only when values are not None, preventing OpenTelemetry validation errors.
  • Test Coverage
    packages/opentelemetry-instrumentation-llamaindex/tests/test_none_content_fix.py
    New test module verifying None content handling in both response handlers using mocked spans and events, ensuring role attributes are set while content is guarded against None values.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 With whiskers twitched and checks in place,
None values won't pollute the trace,
StructuredLLM now flows so clean,
No warnings in the logs we've seen!
A gentle guard for every span,
The safest instrumentation plan.

🚥 Pre-merge checks — ✅ 5 passed

  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title clearly and specifically describes the main change: fixing None content handling in StructuredLLM responses for LlamaIndex instrumentation.
  • Linked Issues Check — ✅ Passed. The PR addresses all coding requirements from issue #3513: adding None checks before setting completion content attributes to prevent OpenTelemetry validation warnings.
  • Out of Scope Changes Check — ✅ Passed. All changes are directly related to issue #3513: modified span_utils.py to add None checks and added corresponding test coverage for the fix.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Kash6 (Author) commented Feb 14, 2026

Hi @nirga @galkleinman @dinmukhamedm!

Just wanted to gently ping this PR. I've updated the branch to be in sync with main, and the original issue reporter, cay89, confirmed in the comments that the fix works for them.

Happy to make any changes if needed. Thanks for maintaining this project!



Development

Successfully merging this pull request may close these issues.

🐛 Bug Report: [LlamaIndex] NoneType error for gen_ai.completion.0.content when using StructuredLLM
