
Mikhail Shchegolev

QA Lead · AI Engineer · Test Automation Architect

I turn manual QA into AI-accelerated automation — LLM pipelines, MCP servers, production test platforms.

Portfolio · Resume · Email · Location: Remote · Open to work


🎯 Summary

Senior Software Engineer and QA Lead with 16+ years of experience across telecom, fintech, e-commerce, and data engineering. For the last two years I have specialized in applying Large Language Models (LLM) and autonomous AI agents to production Quality Assurance (QA) workflows: automatic test generation from requirements, LLM-based quality gates for Extract-Transform-Load (ETL) pipelines, and Model Context Protocol (MCP) servers that let coding agents (Claude Code, Cursor) operate real enterprise tooling.

Currently leading a Quality Assurance organization of ~47 engineers across 10 tribes on an enterprise telecom Ordering platform.

🧭 Open to (Job titles)

AI Engineer · Machine Learning Engineer · LLM Engineer · AI / ML Platform Engineer · QA Lead · Test Automation Architect · Staff QA Engineer · Senior Test Automation Engineer · Principal SDET · Developer Relations (AI / QA)

Format: Remote-first, senior+. Based in Turkey (UTC+3). Available for full-time and contract roles.

🗣️ Languages

  • English — Professional Working Proficiency
  • Russian — Native

💼 What I do

  • 🧠 AI in Quality Assurance — pipelines that convert Jira tickets into working pytest / Allure tests, LLM-based quality gates for ETL, prompt libraries and agent patterns used by 40+ testers
  • 🔌 Model Context Protocol (MCP) servers — open-source integrations for GitLab, Jaeger, NotebookLM so coding agents (Claude Code, Cursor, OpenCode) can drive real Continuous Integration / Continuous Deployment (CI/CD) and observability tooling
  • 🏭 Test platforms from scratch — 10+ production test environments, 4+ GitLab CI/CD pipelines, blue/green deployments, contract testing, synthetic test-data factories
  • 🎤 Enablement & developer experience — internal workshops on AI for QA (47 engineers, Apr 2026), curated "Must-have AI" course track, written guides for Claude Code / Cursor adoption in QA

📊 Impact snapshot

| Metric | Value |
| --- | --- |
| Years in engineering | 16+ |
| Quality Assurance engineers enabled | 47+ |
| Test environments built from scratch | 10+ |
| CI/CD pipelines built from scratch | 4+ |
| Companies & industries | 7 |
| Production API endpoints shipped (Multi-AI Suite) | 157+ |
| Automated tests shipped (Multi-AI Suite) | 530+ |

🚀 Currently working on

  • MCP ecosystem for QA — building Model Context Protocol servers for GitLab, Jaeger, Allure so Claude Code and Cursor can operate enterprise test infrastructure without glue code
  • AIQA Pipeline v4 — generating pytest and Allure tests directly from Jira tickets with LLM-driven spec validation and automatic quality gates
  • AI-for-QA enablement — rolling out prompt libraries, reusable agent patterns, and internal training to 47+ testers across 10 tribes

⭐ Featured Projects

🤖 Multi-AI Suite

Production Software-as-a-Service (SaaS) for parallel Large Language Model queries across GPT, Claude, Gemini, Groq, Perplexity. Includes a Maestro multi-analyst orchestration framework, Retrieval-Augmented Generation (RAG) document analysis, health tracking, and VPN management. 157+ REST API endpoints, 530+ automated tests, blue/green deployment.

Tech: Python FastAPI Docker SQLite OpenAI API Anthropic API Gemini API Groq Perplexity RAG GitHub Actions Blue/Green Deploy
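The "parallel queries across providers" idea above can be sketched with `asyncio`. The provider callables here are stubs; real code would await the OpenAI / Anthropic / Gemini SDK clients instead.

```python
# Minimal fan-out sketch: query several LLM providers concurrently.
import asyncio

async def ask(provider: str, prompt: str) -> tuple[str, str]:
    # Stub: a real implementation would await the provider's async client.
    await asyncio.sleep(0)  # stands in for network latency
    return provider, f"{provider} answer to: {prompt}"

async def fan_out(prompt: str, providers: list[str]) -> dict[str, str]:
    # gather() runs all provider calls concurrently, so one slow provider
    # no longer serializes the whole request.
    results = await asyncio.gather(*(ask(p, prompt) for p in providers))
    return dict(results)

answers = asyncio.run(fan_out("Summarize RAG", ["gpt", "claude", "gemini"]))
print(sorted(answers))
```

In production, `gather(..., return_exceptions=True)` is the usual refinement, so one failing provider degrades the response instead of aborting it.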

🔌 GitLab MCP Server

Model Context Protocol (MCP) server that exposes GitLab CI/CD to LLM agents — pipelines, schedules, Merge Requests, files. Works with any GitLab installation (SaaS or self-hosted). Lets Claude Code and Cursor drive real Continuous Integration / Continuous Deployment (CI/CD) without glue code.

Tech: Python FastMCP Model Context Protocol GitLab API Claude Code Cursor Anthropic
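A helper in the spirit of such a server might build GitLab REST requests like this. The endpoint path follows the public GitLab v4 API; the function name and token handling are illustrative assumptions.

```python
# Hypothetical request builder for an agent-facing "list pipelines" tool.
from urllib.parse import quote, urlencode

def pipelines_request(base_url: str, project: str, token: str,
                      status: str = "") -> tuple[str, dict]:
    # GitLab expects the project path URL-encoded ("group/repo" -> "group%2Frepo").
    path = quote(project, safe="")
    query = urlencode({"status": status}) if status else ""
    url = f"{base_url}/api/v4/projects/{path}/pipelines" + (f"?{query}" if query else "")
    headers = {"PRIVATE-TOKEN": token}  # same header on SaaS and self-hosted
    return url, headers

url, headers = pipelines_request("https://gitlab.com", "group/repo",
                                 "example-token", status="failed")
print(url)
```

Because only `base_url` varies between gitlab.com and a self-hosted instance, the same tool works against any GitLab installation.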

🧪 AIQA Pipeline

AI-powered Quality Assurance automation: automatic test generation from Jira tasks, LLM quality gates for Extract-Transform-Load (ETL) pipelines, Allure TestOps integration, Spec-Driven Development (SDD) workflow. Deployed internally across an enterprise QA organization.

Tech: Python pytest Jira REST API Allure TestOps Large Language Models Spec-Driven Development PostgreSQL ClickHouse
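The shape of an ETL quality gate like the one described above might look as follows. The LLM verdict is stubbed out (in production it would come from a model reviewing sampled rows against the spec), and the null-ratio threshold is an assumption.

```python
# Illustrative ETL quality gate: cheap deterministic checks first,
# then an LLM verdict for rows that pass structural validation.
def quality_gate(rows: list, llm_verdict: str = "pass") -> bool:
    if not rows:
        return False
    # Ratio of None values across all cells (assumes homogeneous rows).
    null_ratio = sum(v is None for r in rows for v in r.values()) / (
        len(rows) * len(rows[0]))
    if null_ratio > 0.05:          # assumed tolerance
        return False
    # The LLM only arbitrates rows that already pass structural checks.
    return llm_verdict == "pass"

rows = [{"order_id": 1, "total": 120.0}, {"order_id": 2, "total": None}]
print(quality_gate(rows))  # one null out of four values -> 0.25 > 0.05 -> False
```

Running the deterministic checks before the model call keeps the gate cheap: most broken batches are rejected without spending tokens.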

📓 NotebookLM Claude Code Skill

Claude Code skill that queries Google NotebookLM notebooks for source-grounded, citation-backed answers from Gemini. Features persistent authentication, source upload, and library management through browser automation.

Tech: Python Claude Code Google Gemini NotebookLM Browser Automation Playwright


🛠️ Technical Skills

Artificial Intelligence / Machine Learning: Large Language Models (LLM), LLM Orchestration, LangChain, LangGraph, Retrieval-Augmented Generation (RAG), Vector Databases, Embeddings, Prompt Engineering, AI Agents, Model Context Protocol (MCP), Function Calling, OpenAI API, Anthropic Claude API, Google Gemini API, Groq, Perplexity API, Claude Code, Cursor IDE, GitHub Copilot, OpenCode

Quality Assurance & Test Automation: pytest, Playwright, Selenium, Selenoid, Allure TestOps, Allure Framework, TestNG, JUnit, BDD, Behave, Cucumber, JMeter, Locust, API testing, UI testing, End-to-End (E2E) testing, contract testing, test data factories, test planning, Spec-Driven Development (SDD), Quality Gates, mutation testing, Test Case Management

Programming Languages: Python, Java, Go (Golang), SQL, JavaScript, TypeScript, C#, Bash, Shell scripting, PL/SQL

Data Engineering & Pipelines: Apache Spark, PySpark, Apache Airflow, Apache Kafka, Hive, HDFS, Amazon S3, Vertica, Extract-Transform-Load (ETL), streaming pipelines, data quality

Databases: PostgreSQL, MongoDB, Oracle, ClickHouse, Redis, RabbitMQ, MySQL, SQLite

Backend & APIs: FastAPI, Flask, REST API, GraphQL, gRPC, WebSockets, Microservices, API design, OpenAPI / Swagger

Cloud, DevOps & Infrastructure: Docker, Docker Compose, Kubernetes (k8s), Helm, GitLab CI/CD, Jenkins, GitHub Actions, Amazon Web Services (AWS), Nginx, Linux (Debian, Ubuntu, CentOS), Infrastructure as Code

Observability & Monitoring: ELK Stack (Elasticsearch, Logstash, Kibana), Grafana, Jaeger (distributed tracing), Sentry, Zabbix, TICK Stack, Prometheus, OpenTelemetry

Tools & Platforms: Git, GitHub, GitLab, Jira, Confluence, Atlassian suite, Metabase, SonarQube, Harbor registry, Postman



🎤 Writing & Speaking

  • April 2026 · AI in the Tester's Workflow — internal workshop for 47 Quality Assurance engineers at an enterprise telecom company
  • Author of an internal "Must-have AI courses for Testers" learning track (8 curated modules covering LLM basics, prompt engineering, Cursor / Claude Code adoption, AI-assisted test design)

📫 Contact

📄 Resume (PDF) · 🌐 Portfolio · 📧 mshegolev@gmail.com

For recruiters: I respond fastest via email. I am open to full-time remote and contract roles at the Senior / Lead / Staff / Principal level.

