
Add guardrails for LLM-generated SQL and AI-driven schema changes #9

@Mukku27

Description


Issue class: Enhancement
Severity: High
Category: ai-safety
Repository: Mukku27/Inventory-Management-Using-GenAI
Affected files or impacted area: natural-language SQL, Excel/LLM schema mapping, write operations

Summary

Add runtime guardrails for LLM-generated SQL and AI-driven column mapping before this project is positioned as production-ready.

Why it matters

The current design allows model output to influence SQL execution and schema mutation in an inventory system. Without validation, confirmation, and policy boundaries, this is unsafe for real data.

Technical evidence or justification

  • app.py:136-145 generates SQL from natural language and immediately executes the returned query.
  • excel_processing.py:30-36 uses model-assisted column mapping and can add new database columns based on that output.
  • There are no validation layers, allowlists, dry-run reviews, or prompt-injection mitigations in the repository.
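As a hedged illustration of the first gap, a minimal pre-execution guard could sit between SQL generation and execution. Everything below is a sketch, not code from the repository: the function name, the allowlist policy, and the keyword blocklist are all assumptions about what a first guardrail might look like.

```python
import re

# Hypothetical policy: only single read-only statements may run unconfirmed.
READ_ONLY_PREFIXES = {"SELECT", "EXPLAIN"}

# Keywords that should never appear in an auto-executed query.
# (Crude blocklist for illustration; a real guard would parse the SQL.)
FORBIDDEN = re.compile(
    r"\b(DROP|DELETE|UPDATE|INSERT|ALTER|CREATE|ATTACH)\b", re.IGNORECASE
)

def is_safe_readonly(sql: str) -> bool:
    """Return True only if the generated SQL looks like one read-only statement."""
    stripped = sql.strip().rstrip(";")
    if not stripped or ";" in stripped:   # reject empty or multi-statement input
        return False
    first_keyword = stripped.split(None, 1)[0].upper()
    if first_keyword not in READ_ONLY_PREFIXES:
        return False
    return not FORBIDDEN.search(stripped)
```

Anything this function rejects would be routed to a human-confirmation path rather than executed directly. A production version would replace the regex with a real SQL parser and a read-only database connection.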

Expected behavior or target state

AI-assisted operations should be bounded by explicit policies, validated before execution, and auditable after the fact.

Actual behavior or current gap

The repository has no guardrails around model-generated database operations.

Recommended fix

  • Parse and allowlist generated SQL before execution.
  • Separate read-only and write workflows.
  • Require human confirmation for destructive or schema-changing operations.
  • Validate model outputs against existing schema constraints.
  • Add prompt/version management and regression evals for AI behaviors.
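For the schema-mapping side, validating model output against existing columns could look like the sketch below. The function name, the mapping shape ({excel_header: db_column}), and the identifier policy are hypothetical, chosen only to illustrate the "validate before mutate" step.

```python
import re

# Hypothetical policy: column names must be conservative SQL identifiers.
IDENT = re.compile(r"^[a-z][a-z0-9_]{0,62}$")

def validate_mapping(proposed: dict, existing_columns: set):
    """Split a model-proposed {excel_header: db_column} mapping into an
    accepted part and a list of rejections that need human review."""
    accepted, needs_review = {}, []
    for header, column in proposed.items():
        if not IDENT.match(column):
            needs_review.append((header, column, "invalid identifier"))
        elif column not in existing_columns:
            # Adding a new column is a schema mutation; never auto-apply it.
            needs_review.append((header, column, "new column requires confirmation"))
        else:
            accepted[header] = column
    return accepted, needs_review
```

The key design point is that the model's output never reaches ALTER TABLE directly: new or malformed column names fall into the review list, which maps onto the human-confirmation requirement above.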

Suggested GitHub labels

enhancement, severity:high, category:ai-safety
