Hi tctco,
It's Qukoyk again.
Are you feeling better now? Are your studies going smoothly?
Thank you very much for your previous reply. Although I'm not sure why, the earlier issue disappeared. But today there's some good news that I think is worth reporting upstream (to you), so I've opened a new issue.
Just like last time when the AI helped me deploy Docker, this time the evolved LLM has helped me solve the problem I encountered.
Yes, now I can run the tracking smoothly and execute segCluster.
I had the AI summarize the changes. I know the current AI code quality may not be good enough for a polished, elegant PR, but some of the changes might help save you time, so I chose to provide feedback via an issue. The body is as follows:
Bug Fixes Summary
This document summarizes four bugs that were identified and fixed in the STCS project (3 backend, 1 frontend).
Bug #1: AttributeError in classifier.py - Incorrect dataset path for val/test dataloaders
Error Message
AttributeError: 'ConfigDict' object has no attribute 'dataset'
Location
backend/app/algorithm/classifier.py, lines 144-145
Root Cause
In the `_build_config` method, the code incorrectly accessed `val_dataloader.dataset.dataset.pipeline` and `test_dataloader.dataset.dataset.pipeline` when inserting a `Resize` transform.
The configuration structure differs between the train and val/test dataloaders:
- Train dataloader: uses a `ClassBalancedDataset` wrapper, so the access path is `.dataset.dataset.pipeline`
- Val/Test dataloader: no wrapper, so the access path should be `.dataset.pipeline` directly
Solution
Remove the extra .dataset from val and test dataloader paths:
Before:

```python
if scale < 1:
    transformer = dict(type="Resize", scale_factor=scale)
    config.train_dataloader.dataset.dataset.pipeline.insert(1, transformer)
    config.val_dataloader.dataset.dataset.pipeline.insert(1, transformer)   # Wrong
    config.test_dataloader.dataset.dataset.pipeline.insert(1, transformer)  # Wrong
```
After:

```python
if scale < 1:
    transformer = dict(type="Resize", scale_factor=scale)
    config.train_dataloader.dataset.dataset.pipeline.insert(1, transformer)
    config.val_dataloader.dataset.pipeline.insert(1, transformer)   # Fixed
    config.test_dataloader.dataset.pipeline.insert(1, transformer)  # Fixed
```
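For illustration, the two config shapes can be sketched with plain nested dicts. This is a hypothetical mock, not the real STCS config: the field names mirror the mmengine-style structure described above, and the pipeline contents are made up.

```python
# Hypothetical sketch of the two dataloader config shapes.
# The train dataset is wrapped in a ClassBalancedDataset, so its
# pipeline sits one level deeper than in the val/test configs.
train_dataloader = dict(
    dataset=dict(
        type="ClassBalancedDataset",
        dataset=dict(pipeline=[dict(type="LoadImage")]),  # wrapped: two .dataset hops
    )
)
val_dataloader = dict(
    dataset=dict(pipeline=[dict(type="LoadImage")])  # unwrapped: one .dataset hop
)

# Inserting the Resize transform requires a different path for each shape
transformer = dict(type="Resize", scale_factor=0.5)
train_dataloader["dataset"]["dataset"]["pipeline"].insert(1, transformer)
val_dataloader["dataset"]["pipeline"].insert(1, transformer)
```

Using the train-style double `.dataset` path on the unwrapped val/test configs is exactly what produced the `AttributeError` above.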
Bug #2: AttributeError in pose.py - Wrong variable checked in loop
Error Message
AttributeError: 'NoneType' object has no attribute 'bboxes'
Location
backend/app/algorithm/pose.py, line 64
Root Cause
In the `MMPoseTopDownEstimator.predict` method, the loop iterates over `images` and `dets` together, but the skip condition checked `if dets is None:` instead of `if det_group is None:`.
- `dets`: the entire list of detection results
- `det_group`: the detection result for the current frame (can be `None` if nothing was detected)

When a frame has no detections, `det_group` is `None`, but the code checked `dets` (the whole list), which is never `None`, so the skip condition never fired.
Solution
Change the variable name to check the correct element:
Before:

```python
for image, det_group in zip(images, dets):
    if dets is None:  # Wrong: checks the whole list
        continue
    for box, score in zip(det_group.bboxes, det_group.scores):
        ...
```
After:

```python
for image, det_group in zip(images, dets):
    if det_group is None:  # Fixed: checks the current element
        continue
    for box, score in zip(det_group.bboxes, det_group.scores):
        ...
```
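As a quick sanity check, the corrected skip condition can be exercised with stand-in data. `DetGroup` here is a made-up substitute for the real detection result type, and the frames and scores are invented:

```python
from dataclasses import dataclass

@dataclass
class DetGroup:
    # Stand-in for a per-frame detection result
    bboxes: list
    scores: list

images = ["frame0", "frame1", "frame2"]
# frame1 has no detections, so its entry in dets is None
dets = [DetGroup([[0, 0, 10, 10]], [0.9]), None, DetGroup([[5, 5, 20, 20]], [0.8])]

processed = []
for image, det_group in zip(images, dets):
    if det_group is None:  # Fixed check: skips only the empty frame
        continue
    for box, score in zip(det_group.bboxes, det_group.scores):
        processed.append((image, score))
```

With the old `if dets is None:` check, the `None` entry for `frame1` would have reached the inner loop and raised the `AttributeError` quoted above.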
Bug #3: AttributeError in stage1.py - Missing null check for pose when using optical flow
Error Message
AttributeError: 'NoneType' object has no attribute 'save_images'
Location
backend/app/algorithm/stage1.py, line 188
Root Cause
In the `online_track` function, when processing frames with optical flow enabled, the code iterates over the `poses` list, which may contain `None` entries for frames in which no objects were detected.
The loop processes each frame's pose result, but when `pose` is `None` (no detection), the code still attempts to call:
- `pose.save_images()`
- `pose.get_datum()`

This crashes the application whenever it encounters a frame without any detected objects.
Solution
Add a null check at the beginning of the loop to skip frames with no detections:
Before:

```python
for frame, flow, pose, frame_idx in zip(batch, flows, poses, frame_indexes):
    if max_det > 1:
        if enable_flow:
            ids, dead_tracklets = tracker.update(flow, pose, frame_idx)
        else:
            ids, dead_tracklets = tracker.update(pose, frame_idx)
    # ... rest of code that uses pose
```
After:

```python
for frame, flow, pose, frame_idx in zip(batch, flows, poses, frame_indexes):
    if pose is None:
        continue
    if max_det > 1:
        if enable_flow:
            ids, dead_tracklets = tracker.update(flow, pose, frame_idx)
        else:
            ids, dead_tracklets = tracker.update(pose, frame_idx)
    # ... rest of code that uses pose
```
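The effect of the guard can be shown with a minimal stub. `Pose` below is an invented stand-in for the real pose result class, kept only to demonstrate the crash-vs-skip behavior:

```python
class Pose:
    # Minimal stand-in for a per-frame pose result
    def save_images(self):
        return "ok"

poses = [Pose(), None, Pose()]  # the middle frame had no detections

# Without the guard: the None entry raises the AttributeError reported above
crashed = False
try:
    for pose in poses:
        pose.save_images()
except AttributeError:
    crashed = True

# With the guard: None frames are skipped and the loop completes
results = []
for pose in poses:
    if pose is None:
        continue
    results.append(pose.save_images())
```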
Bug #4: Skeleton rendering not working in VideoPlayer.tsx - Incorrect falsy check for frame 0
Error Message
(No explicit error - skeletons simply don't render on the video)
Location
frontend/src/components/VideoPlayer.tsx, line 481
Root Cause
In the Plotter component's useEffect hook, the condition !props.currFrame was used to check if the current frame is valid. However, this condition returns true when currFrame is 0 (the first frame of the video), because 0 is a falsy value in JavaScript.
This caused the skeleton rendering to be skipped when the video is at frame 0, preventing users from seeing the pose tracking results.
Solution
Change the falsy check to an explicit null/undefined check:
Before:

```tsx
useEffect(() => {
  if (
    !props.trackData ||
    !props.currFrame || // Wrong: 0 is falsy
    props.currFrame + props.frameShift >= props.interval[1] ||
    props.currFrame + props.frameShift < props.interval[0]
  )
    return;
  // ... rendering logic
});
```
After:

```tsx
useEffect(() => {
  if (
    !props.trackData ||
    props.currFrame == null || // Fixed: only null/undefined
    props.currFrame + props.frameShift >= props.interval[1] ||
    props.currFrame + props.frameShift < props.interval[0]
  )
    return;
  // ... rendering logic
});
```
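The pitfall is the same in Python, where `0` is also falsy, so it can be illustrated without a browser. `curr_frame` here is just a stand-in for `props.currFrame` (note that the JavaScript `== null` check also matches `undefined`, which has no direct Python equivalent):

```python
def should_skip_truthiness(curr_frame):
    # Old-style check: rejects frame 0 because 0 is falsy
    return not curr_frame

def should_skip_none_check(curr_frame):
    # Fixed-style check: rejects only a missing value
    return curr_frame is None

# Frame 0 is a valid first frame and must not be skipped
old_skips_frame_zero = should_skip_truthiness(0)      # True: wrongly skipped
new_skips_frame_zero = should_skip_none_check(0)      # False: correctly kept
new_skips_missing = should_skip_none_check(None)      # True: still skipped
```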
Deployment
After applying the fixes, restart the corresponding services:
```bash
cd /home/noldus/Codes/STCS

# For the backend fixes (Bugs #1, #2, #3)
docker compose restart backend worker

# For the frontend fix (Bug #4)
docker compose restart frontend

# Or restart everything
docker compose restart backend worker frontend
```
Environment
The following bugs were identified and fixed in this environment:
| Component | Information |
| --- | --- |
| Operating System | Ubuntu 24.04.3 LTS (WSL2 on Windows) |
| Kernel | 6.6.87.2-microsoft-standard-WSL2 |
| CPU | Intel Xeon E5-1603 0 @ 2.80GHz (4 cores) |
| Memory | 16 GB RAM |
| GPU | NVIDIA GeForce GTX 1660 (6 GB VRAM) |
| NVIDIA Driver | 560.94 |
| Docker | 29.1.5 |
| Docker Compose | 5.0.2 |