🎓 M.Tech in Artificial Intelligence & Machine Learning
🔬 AI Researcher (in transition) | LLMs | GenAI | MLOps | Multimodal | Responsible AI | ML System Optimization
☁️ 14+ Years of Experience in Cloud & DevOps | 🧠 Building AI Systems that Scale
"I don't just build models. I build the systems, and the trust, that make AI research impactful."
I'm an AI researcher in the making, blending Cloud & DevOps leadership (14+ years) with a deep curiosity for frontier AI research.
I specialize in:
- 🤖 Fine-tuning & optimizing LLMs and transformer-based architectures
- 🧠 Exploring multimodal learning (language + vision)
- ⚙️ Architecting infrastructure for large-scale distributed training
- 🧪 Building reproducible MLOps workflows for AI research
- ⚡ Optimizing ML systems for performance, efficiency & scalability
- ⚖️ Integrating principles of Fair, Interpretable & Trustworthy ML
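To give a flavor of the first item: the core idea of LoRA is to freeze a weight matrix W and train only two low-rank factors B (d×r) and A (r×d), adding their scaled product to W. A minimal, framework-free sketch of the math and the parameter savings (all names here are illustrative; real fine-tuning uses PyTorch/PEFT):

```python
# LoRA in miniature: the frozen weight W is adapted as W + (alpha / r) * B @ A,
# so only B and A (2 * d * r values) are trained instead of all d * d values.

def lora_param_counts(d: int, r: int) -> tuple[int, int]:
    """Trainable parameters for full fine-tuning vs. a rank-r LoRA adapter."""
    full = d * d          # updating the whole weight matrix
    lora = 2 * d * r      # B is d x r, A is r x d
    return full, lora

def lora_forward(w, b, a, x, alpha: float, r: int):
    """y = (W + (alpha / r) * B @ A) x, written with plain nested lists."""
    d = len(w)
    scale = alpha / r
    # delta[i][j] = sum_k B[i][k] * A[k][j]  (the low-rank update)
    delta = [[sum(b[i][k] * a[k][j] for k in range(r)) for j in range(d)]
             for i in range(d)]
    return [sum((w[i][j] + scale * delta[i][j]) * x[j] for j in range(d))
            for i in range(d)]

full, lora = lora_param_counts(d=4096, r=8)
print(full, lora, round(full / lora))  # full fine-tune vs. adapter size
```

For a 4096-wide layer at rank 8, the adapter trains a small fraction of the full matrix's parameters, which is why LoRA makes domain adaptation cheap.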
With a strong engineering backbone, I thrive at the intersection of AI research, system design, and responsible innovation.
| Domain 🧠 | Focus Areas ✨ |
|---|---|
| 🚀 LLMs & GenAI | Pre-training, fine-tuning, LoRA, RAG, evaluation, domain adaptation |
| 🖼️ Computer Vision | Vision Transformers (ViTs), representation learning, multimodal fusion |
| ⚡ ML System Optimization | Distributed training, model efficiency, quantization, serving, cost & latency tuning |
| 🧭 Responsible AI (FAccT) | Fairness, interpretability, transparency, explainability, bias mitigation |
| 🧪 Research Infrastructure & MLOps | Experiment tracking, scaling, reproducibility, containerized workflows |
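The "quantization" entry under ML System Optimization can be illustrated with a tiny symmetric int8 scheme: derive one scale from the largest absolute weight, round to integers in [-127, 127], and dequantize on the fly. A hedged, stdlib-only sketch (the weights are made up; production stacks use fused PyTorch/CUDA kernels):

```python
# Symmetric per-tensor int8 quantization: x ~= scale * q, with q in [-127, 127].

def quantize(xs: list[float]) -> tuple[list[int], float]:
    # Scale so the largest weight maps to 127; fall back to 1.0 for all-zero input.
    scale = max(abs(x) for x in xs) / 127 or 1.0
    qs = [max(-127, min(127, round(x / scale))) for x in xs]
    return qs, scale

def dequantize(qs: list[int], scale: float) -> list[float]:
    return [q * scale for q in qs]

weights = [0.5, -1.27, 0.0, 0.8]          # toy weights
qs, scale = quantize(weights)
restored = dequantize(qs, scale)
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
print(qs, max_err)  # round-trip error is bounded by scale / 2
```

Storing int8 instead of float32 cuts memory 4x; the price is a per-weight error of at most half the scale, which is the trade quantization always makes.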
| 🧠 Project | 📝 Description | 🧰 Focus | 🧪 Stack |
|---|---|---|---|
| llm-finetune-lora | LoRA fine-tuning of LLMs for domain-specific tasks | LLM, NLP, Optimization | PyTorch · HuggingFace · LoRA |
| multimodal-ai-lab | Exploring joint learning from text & image inputs | Multimodal Learning | Transformers · OpenCV · PyTorch |
| fair-ml-evaluation | Building a pipeline to evaluate ML models for fairness & interpretability | Responsible AI (FAccT) | AIF360 · SHAP · Sklearn |
| mlops-for-research | Reproducible experiment orchestration at scale | MLOps | MLflow · K8s · GitHub Actions |
| ml-system-optimization | Experiments with distributed training, quantization & inference acceleration | ML System Optimization | PyTorch · CUDA · AWS |
| distributed-training-infra | Cloud infra setup for distributed AI training | AI Infra | Terraform · Kubernetes · AWS Batch |
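As a flavor of what a fairness evaluation pipeline like fair-ml-evaluation measures: demographic parity compares positive-prediction rates across groups. A toy, dependency-free sketch (AIF360 provides the production metric; the group labels and predictions below are invented):

```python
# Demographic parity difference: |P(pred = 1 | group A) - P(pred = 1 | group B)|.
# A gap of 0 means both groups receive positive predictions at the same rate.

def positive_rate(preds: list[int]) -> float:
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_a: list[int], preds_b: list[int]) -> float:
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

group_a = [1, 1, 0, 1]   # toy binary predictions for group A
group_b = [1, 0, 0, 0]   # toy binary predictions for group B
gap = demographic_parity_diff(group_a, group_b)
print(gap)  # 0.75 - 0.25 = 0.5
```

A large gap flags the model for bias-mitigation work; SHAP-style attributions then help explain where the disparity comes from.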
📌 (Flagship repos will be pinned as they mature; stay tuned.)
- 🔧 14+ years designing scalable cloud & DevOps solutions for global enterprises.
- 🧠 M.Tech in AI/ML with a research focus on LLMs, Multimodal AI, ML System Optimization, and Responsible AI.
- 🤖 Expertise in model training, fine-tuning, optimization, and deployment at scale.
- ⚡ Specialized in distributed AI, system efficiency, and inference acceleration.
- ⚖️ Passionate about building fair, interpretable, and trustworthy ML systems.
- 🧪 Believer in open research, reproducibility, and engineering rigor.
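The distributed-AI work above boils down, at each training step, to averaging gradients across workers (what an all-reduce collective does) so every replica applies the same update. A stdlib-only sketch of that single step (the worker count and gradient values are illustrative, not from a real run):

```python
# Data-parallel SGD step: each worker computes a local gradient on its shard,
# then all workers average the gradients (all-reduce) and update identically.

def allreduce_mean(worker_grads: list[list[float]]) -> list[float]:
    """Element-wise mean of per-worker gradient vectors."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n
            for i in range(len(worker_grads[0]))]

def sgd_step(params: list[float], grad: list[float], lr: float) -> list[float]:
    return [p - lr * g for p, g in zip(params, grad)]

grads = [[0.2, 0.4], [0.6, 0.0], [0.4, 0.2]]  # 3 workers, 2 parameters
avg = allreduce_mean(grads)
params = sgd_step([1.0, 1.0], avg, lr=0.1)
print(avg, params)
```

In practice `torch.distributed` performs this averaging with ring or tree all-reduce to keep communication cost flat as workers scale; the math per step is exactly this.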
- ⚙️ Fine-tuning and adapting LLMs for specialized domains
- 🖼️ Multimodal AI: bridging language and vision
- ⚡ ML System Optimization: quantization, LoRA, distillation, serving, performance tuning
- 🔧 MLOps for reproducible research
- ⚖️ Fair, Interpretable, and Trustworthy ML (FAccT)
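One small habit behind "MLOps for reproducible research": fingerprint every run's configuration so any tracked result can be matched to the exact setup that produced it. A hedged stdlib sketch (the config keys are invented; MLflow and friends do this as part of full experiment tracking):

```python
import hashlib
import json

# Reproducibility fingerprint: hash a canonical JSON dump of the run config.
# Key order must not matter, so the dump is sorted and compact.

def run_id(config: dict) -> str:
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

cfg = {"model": "toy-llm", "lr": 3e-4, "seed": 42}   # illustrative keys
same = {"seed": 42, "lr": 3e-4, "model": "toy-llm"}  # same config, reordered
print(run_id(cfg), run_id(cfg) == run_id(same))
```

Logging this ID alongside metrics, the git commit, and the random seed is usually enough to re-run any experiment bit-for-bit comparably.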
I aspire to grow as an AI researcher who codes: someone who contributes to scientific advances while building the infrastructure and optimizations that make those advances scalable, efficient, and trustworthy in the real world.
⭐ "Great models are built twice: once in research, and once again in optimized, responsible systems."

