
DisasterVisionAI

DisasterVisionAI is an AI-based disaster detection system that detects fire and people in images, videos, or webcam feeds. It uses a YOLOv8 model for inference and supports real-time detection with annotated outputs and JSON export.

🔹 Dependencies

Install the required Python packages:

Python 3.8+ recommended

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu  # CPU version
pip install opencv-python
pip install ultralytics
pip install numpy
pip install pyyaml
pip install requests

Optional (GPU version):

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
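To verify the environment before running anything, a quick check like the following can help (a throwaway script, not part of the repository):

import torch
import cv2
from ultralytics import YOLO

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # False is expected with the CPU-only install
print("OpenCV:", cv2.__version__)
YOLO("yolov8n.pt")  # downloads the small pretrained YOLOv8 model on first use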

🔹 Running Inference

1. Webcam detection

python main.py --mode inference --input webcam --model_path merged_model.pt

Opens your webcam feed.

Press q to quit.

Annotated video shown live; detections saved in JSON (if configured).
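For reference, a minimal standalone version of this flow using the ultralytics and OpenCV APIs might look like the sketch below; it is an illustration, not the project's actual main.py implementation.

from ultralytics import YOLO
import cv2

model = YOLO("merged_model.pt")   # the project's merged weights
cap = cv2.VideoCapture(0)         # 0 = default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)               # run inference on the current frame
    annotated = results[0].plot()        # draw bounding boxes and class labels
    cv2.imshow("DisasterVisionAI", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()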

2. Video file detection

python main.py --mode inference --input test_videos/video1.mp4 --model_path merged_model.pt

Replace video1.mp4 with your video file.

Annotated video saved to output/ folder (or as defined in code).

Press q to stop early.
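A rough sketch of the video path, again using the ultralytics and OpenCV APIs directly (the output file name below is an assumption; main.py defines the real one):

from ultralytics import YOLO
import cv2
import os

model = YOLO("merged_model.pt")
cap = cv2.VideoCapture("test_videos/video1.mp4")

# Frame resolution and FPS are read from the input file (see Notes)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

os.makedirs("output", exist_ok=True)
writer = cv2.VideoWriter("output/video1_annotated.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(model(frame)[0].plot())  # annotate the frame and append it to the output video

cap.release()
writer.release()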

3. Image detection

python main.py --mode inference --input test_images/img1.jpg --model_path merged_model.pt

Replace img1.jpg with your image file.

Annotated image saved automatically.

JSON export includes detected bounding boxes and classes.
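The exact JSON schema is defined in the project code; the sketch below shows one way to dump detections from an ultralytics result to JSON (field names and output paths here are assumptions):

import json
import os
import cv2
from ultralytics import YOLO

model = YOLO("merged_model.pt")
result = model("test_images/img1.jpg")[0]

os.makedirs("output", exist_ok=True)
cv2.imwrite("output/img1_annotated.jpg", result.plot())  # save the annotated image

detections = [
    {
        "class": result.names[int(cls)],          # e.g. "fire" or "person"
        "confidence": float(conf),
        "bbox_xyxy": [float(v) for v in xyxy],    # [x1, y1, x2, y2] in pixels
    }
    for cls, conf, xyxy in zip(result.boxes.cls, result.boxes.conf, result.boxes.xyxy)
]

with open("output/img1_detections.json", "w") as f:
    json.dump(detections, f, indent=2)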

4. Direct merged model detection

python detect_combined.py

Default: uses your webcam feed.

Change the source variable in the code for video or image input (see the example below).

Annotated frames shown live.

Press q to quit.
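For reference, a typical source assignment inside detect_combined.py might look like this (the exact variable location is in the script itself; the values below are illustrative):

source = 0                            # default: webcam index
# source = "test_videos/video1.mp4"   # a video file
# source = "test_images/img1.jpg"     # a single image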

🔹 Training / Retraining YOLO Model

If you want to retrain the model on your own dataset:

Prepare your dataset in YOLO format (images + labels):

datasets/combined/
├── images/
│   ├── train/
│   ├── val/
│   └── test/
└── labels/
    ├── train/
    ├── val/
    └── test/
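YOLOv8 training also expects a dataset YAML that points at these folders and lists the class names; a sketch is shown below (the file name and the class order are assumptions):

# datasets/combined/data.yaml
path: datasets/combined
train: images/train
val: images/val
test: images/test
names:
  0: fire
  1: person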

Open yolo_model.py and call the train() function, or use a Python console:

from yolo_model import YOLOv8Model

model = YOLOv8Model()
model.train()  # trains on datasets/combined and saves best.pt

After training, load your custom weights for inference:

model.load_weights("best.pt")

Now run inference as usual with main.py or detect_combined.py.
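If you prefer to skip the wrapper class, the same train-then-infer workflow can be run directly against the ultralytics API; the epochs, image size, and paths below are assumptions, not the values used in yolo_model.py.

from ultralytics import YOLO

# Fine-tune from pretrained YOLOv8 weights on the combined dataset
model = YOLO("yolov8n.pt")
model.train(data="datasets/combined/data.yaml", epochs=50, imgsz=640)

# Load the best checkpoint produced by training and run inference on a test image
best = YOLO("runs/detect/train/weights/best.pt")
best("test_images/img1.jpg")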

🔹 Notes

merged_model.pt must exist in the project root; otherwise, the system falls back to pretrained YOLOv8.

JSON export contains detection bounding boxes and class names for each frame.

Video frame resolution is automatically detected from the input file.

Ensure proper OpenCV display support (GUI windows) for showing annotated frames.

Training in yolo_model.py is a simplified demo; for real performance, expand the dataset and adjust the YOLO training hyperparameters.
