
Hi, I'm

Christoph Lengowski

I combine QA leadership, test management, and requirements engineering with reliable delivery in complex project environments. In parallel, I build AI-assisted workflows and tools that extend this practice technically.

Quality Assurance · Test Management · Requirements Engineering · Test Automation · Agile Delivery

What I Bring

Experience at the intersection of requirements, quality engineering, test strategy, and reliable product execution.

QA Leadership & Test Management

Building and steering QA processes in complex projects. Risk-based test strategy, governance, and quality reporting at management level.

Test Automation & CI/CD

Designing automatable test assets, structuring Cucumber/Gherkin scenarios, and working closely with developers on Playwright implementation. Embedded in CI/CD and delivery workflows with Jenkins, Jira/Xray, and Bitbucket.

Requirements Engineering

Requirements analysis, specification work, and translation of fuzzy business input into testable, prioritized, and reviewable delivery artifacts.

AI-Assisted Workflows & Tooling

Building my own workflows, tools, and prototypes for requirements, analysis, and QA work. I use AI as an extension of my delivery and quality practice, not as a replacement for it.

Agile Delivery

Turning manual or fuzzy processes into clear delivery flows with release readiness, ownership, and reviewable implementation artifacts.

Certifications

Verified credentials and methodological foundations with a strong focus on test management, requirements, and delivery.

2026

ISTQB Test Manager

ISTQB
  • Strategic test planning: defining and steering test strategies plus risk management for complex systems
  • Team leadership and governance: leading test teams, monitoring KPIs, and improving test processes
  • Commercial focus: estimating effort and controlling budgets to maximize QA return on investment
2025

ISTQB Foundation Level

ISTQB
  • Standardized methodology: strong command of the fundamental test process and internationally recognized terminology
  • Holistic test design: applying black-box and white-box test design techniques to find defects effectively
  • Quality mindset: ensuring high software quality through early testing activities across the SDLC
2021

Professional Scrum Master I

Scrum.org

Proof ID: 703391

  • Servant leadership: facilitating Scrum events and removing impediments to maximize team productivity
  • Agile transformation: reinforcing transparency, inspection, and adaptation across the organization
  • Coaching: helping the team self-organize and live the Scrum values and principles
2024

Professional Scrum Product Owner I

Scrum.org

Proof ID: 991278

  • Business value maximization: prioritizing the product backlog strategically to optimize value
  • Stakeholder management: bridging business requirements and technical implementation effectively
  • Product vision: shaping clear product goals and measurable acceptance criteria
2021

IREB CPRE Foundation Level

IREB

Proof ID: 21-CPREFL-197026-20

  • Precise requirements analysis: eliciting, documenting, and validating functional and quality requirements professionally
  • Conflict management: moderating between stakeholder interests to avoid misaligned implementation
  • Specification excellence: producing clear, testable requirements for smoother delivery
2021

PRINCE2 Foundation

PRINCE2 / PeopleCert

Proof ID: GR656228305CL

  • Structured project management: understanding process-oriented methods for controlled project delivery
  • Business case focus: continuously validating the business justification throughout the project lifecycle
  • Roles and responsibilities: defining clear structures and escalation paths for efficient project execution
Profile

About Me

I work at the intersection of QA leadership, requirements engineering, quality review, and reliable delivery in complex project environments.

I am a Senior IT Consultant and QA Team Lead with solid experience across test management, requirements engineering, and quality steering. In complex project environments, I take responsibility for test strategy, automatable test design, quality review, and release readiness with a clear view of risk, stakeholders, and execution reality.

My professional core sits in real delivery contexts: QA governance, test steering, defect management, reporting, and translating vague requirements into durable test, automation, and delivery structures.

In parallel, I build my own AI-assisted workflows, requirements and quality workspaces, and technical prototypes. That builder practice extends my profile considerably, but it does not replace my core: dependable quality and delivery in real projects.

Experience: 5+ Years Delivery & QA
Role: QA Lead & Team Lead
Foundation: M.A. + Scrum/ISTQB
Builder profile: Builder & Product Practice

Quick profile

  • QA team leadership and delivery ownership in complex public-sector programs
  • Strong hands-on practice in test management, quality steering, release readiness, and automation handoff
  • Hands-on product engineering for AI-assisted SaaS, workflow, and quality-workspace products
  • Strong bridge between requirements, quality steering, test strategy, and technical execution
  • Focus on systems that stay reliable from prototype to operating reality

What teams value

  • I bring structure into ambiguous problem spaces and surface risks early.
  • I treat quality as part of requirements and delivery, not as a late control step.
  • I use AI deliberately where it concretely improves test, analysis, and delivery work.

Experience

Career milestones that shaped my profile.

12/2023 - Present

QA Team Lead

Materna Information & Communications SE

Key Impact

QA leadership and test management in a large public-sector program, with team ownership, governance, test strategy, and close collaboration on automation in day-to-day delivery workflows.

Leading a QA team with clear test-management responsibility. Ownership for risk-based test strategy, automatable test design, quality steering, and release readiness in a large-scale public sector project.

  • Built and led a QA team
  • Defined a risk-based test strategy and durable test concepts
  • Structured automatable test cases in Jira/Xray and Cucumber for Playwright handoff
  • Worked with developers on Playwright implementation and Jenkins-based CI/CD integration
  • Stakeholder management and quality governance at project leadership level
12/2021 - 12/2023

IT Consultant

Materna Information & Communications SE

Key Impact

Built durable requirements and delivery structures for real-world digitalization projects and supported implementation through close functional alignment.

Consulting and hands-on work in digitalization projects with a clear focus on requirements engineering. Ownership for requirements intake, functional alignment, backlog structuring, and prioritization as the bridge between client, business stakeholders, and development.

  • Requirements intake and functional analysis translated into actionable backlog items
  • Alignment with clients, business stakeholders, and developers across scope, requirements, and priorities
  • Ownership for structuring, maintaining, and prioritizing the product backlog
  • Creation of functional specifications and implementation-ready delivery artifacts
  • Occasional functional testing and review work from a requirements perspective
10/2021 - 12/2021

IT Consulting Trainee

Materna Information & Communications SE

Key Impact

Built the foundation for public-sector project work through training, certifications, and a structured understanding of project delivery.

Structured trainee program for entering project delivery environments with a focus on methodological foundations, delivery contexts, and IT consulting practice.

  • Intensive onboarding into project contexts, delivery flows, and consulting practice
  • Comprehensive training in Scrum, requirements engineering, and project methodology
  • Completed key certifications as a methodological foundation
  • Fast transition from the trainee program into operational project work

Projects

Selected projects and workspaces that make my approach to requirements, quality, and execution tangible.

01 · Key Project

E-Gov Workflow Platform – Large-Scale QA in the Public Sector

QA Lead / Test Manager for a complex e-government platform

QA leadership with a builder profile

Team leadership, test strategy, test management, automation collaboration, release steering, and deliberate use of AI to support test case and test data work.

8 · QA team members led functionally
4 · System domains covered
CI/CD · Automation handoff embedded in Jenkins and Bitbucket workflows
Public Sector · E-government delivery in a highly regulated environment

A large-scale e-government platform for digital files and process handling in public administration. Within this complex program, I was responsible for planning, steering, and evolving the entire quality assurance setup, raising QA maturity both operationally and methodologically.

My role

QA Lead / Test Manager / IT Consultant

Tech Stack

Playwright · Cucumber / Gherkin · Jenkins · Bitbucket

Challenge

The project ran in a highly complex public-sector environment with multiple clients, backend services, strong traceability requirements, and demanding release expectations. Quality had to be controlled not just through execution, but through risk-based test strategy, defect governance, test steering, and stakeholder reporting.

Solution

I built a structured test organization, led the QA team, and tightly connected test management, automatable test design, and KPI-based reporting with engineering, project leadership, and client stakeholders. I worked with Jira/Xray and Cucumber-based test assets, supported their handoff into Playwright implementation with developers, and embedded the resulting automation in Jenkins- and Bitbucket-supported delivery workflows. In addition, I used AI deliberately through prompt and context engineering to generate, expand, and plausibility-check test cases and test data faster. That turned quality into a controllable delivery capability with clear release readiness instead of a reactive bottleneck shortly before releases.
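
As a minimal sketch of that handoff pattern: a Gherkin scenario maintained in Jira/Xray becomes a tagged Playwright test that CI can report back against the Xray test key. The scenario, selectors, and the TC-1042 key below are invented for illustration, not taken from the project.

```typescript
// Gherkin scenario as managed in Jira/Xray (illustrative):
//   Scenario: Clerk files a document into an existing case
//     Given a signed-in clerk in the web client
//     When they attach "invoice.pdf" to case "2024-0815"
//     Then the document appears in the case file
import { test, expect } from '@playwright/test';

test(
  'TC-1042: clerk files a document into an existing case', // hypothetical Xray key for CI reporting
  { tag: '@regression' },
  async ({ page }) => {
    // Given: authentication assumed via a pre-built storageState; baseURL set in playwright.config
    await page.goto('/cases/2024-0815');
    // When: the clerk attaches the document
    await page.getByLabel('Attach document').setInputFiles('fixtures/invoice.pdf');
    // Then: the document is visible in the case file listing
    await expect(page.getByRole('row', { name: /invoice\.pdf/ })).toBeVisible();
  }
);
```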

Project context

  • Further development of a complex e-government platform for digital files and administrative case workflows
  • End-to-end QA responsibility across Web Client, Outlook Client, Admin Client, and backend services
  • Delivery in a public-sector environment with high expectations around quality, security, and auditability

Project scope

  • Functional leadership of a QA team of up to 8 people
  • Setup and steering of the full test management process
  • Creation of test concepts and test strategies
  • Structuring automatable test cases in Jira/Xray and Cucumber for developer handoff
  • Planning, prioritization, and execution of release and regression testing
  • Ownership of the defect management process
  • Support for government-side test activities plus workshops and training sessions

Impact

  • Built a structured test organization inside a large e-government program
  • Built a risk-based test strategy across multiple system domains
  • Introduced and expanded Playwright- and Cucumber-based automation in close collaboration with development
  • Established defect governance and KPI-based quality reporting
  • Supported integration of automated tests into Jenkins and Bitbucket delivery workflows
  • Used AI deliberately for test case and test data work through prompting and context engineering
  • Owned release and regression steering in a complex public-sector environment
  • Improved release stability through systematic quality steering
  • Established test KPIs and reporting for leadership and stakeholders

Tech Stack

Playwright · Cucumber / Gherkin · Jenkins · Bitbucket · Jira · Xray · Confluence · .NET / C# · SQL Server · Web Services · SharePoint · Outlook Add-in
02 · Own Project

Requirements & Quality Workspace

AI-assisted workspace for discovery, quality review, and test strategy

Tech Stack

Next.js 16 · React 19 · TypeScript

A full-stack product for product owners, business teams, and delivery setups that turns vague ideas into reviewable requirements, quality analysis, traceability links, and test-oriented artifacts. After merging the former Teststrategy Generator into the same product, the workspace now covers structured discovery, requirements critique, test-strategy workflows, Jira test case export, and shareable PDF artifacts inside a production-oriented Next.js architecture.

Challenge

The friction between business context and implementation is rarely about lack of expertise. Stakeholders know the problem, but often express requirements too vaguely or too inconsistently, which costs teams time, scope clarity, and quality.

Solution

I built a guided three-phase interview flow with context-aware follow-up questions and AI-assisted consolidation, then extended it into a reviewable quality workspace. The system now produces prioritized user stories, acceptance criteria, NFRs, and open questions, while also critiquing requirement quality, building traceability, generating test-strategy artifacts, and exporting test-case-ready outputs into Jira-aligned delivery flows.

  • Next.js 16 app with React 19, TypeScript, and Tailwind CSS v4
  • Guided three-phase interview flow instead of blank requirement forms
  • Claude-generated summaries and critique before final generation
  • Artifact bundle with requirements, epics, features, acceptance criteria, and NFRs
  • Traceability, quality-gate, and test-strategy workspace in one product
  • Jira test case export for Zephyr/Xray-oriented delivery flows
  • PDF export for directly shareable requirements and review artifacts
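
A hypothetical sketch of what the structured artifact bundle and a pre-export quality gate can look like; the field names and gate rule are my assumptions, not the product's actual schema.

```typescript
// Hypothetical artifact shape; the real product schema may differ.
interface Requirement {
  id: string;                               // e.g. "REQ-007", used for traceability links
  story: string;                            // user story text
  acceptanceCriteria: string[];             // Given/When/Then-style criteria
  priority: 'must' | 'should' | 'could';
  openQuestions: string[];                  // ambiguity is surfaced, never silently resolved
}

// Pre-export gate: a requirement is handoff-ready only if it is testable
// (has acceptance criteria) and carries no unresolved open questions.
function gateForExport(reqs: Requirement[]): { ready: Requirement[]; blocked: Requirement[] } {
  const ready = reqs.filter(r => r.acceptanceCriteria.length > 0 && r.openQuestions.length === 0);
  const blocked = reqs.filter(r => !ready.includes(r));
  return { ready, blocked };
}
```
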
System Architecture

System Flow

Stakeholder Input → Guided Interview → AI Summaries & Critique → Structured Parsing → Requirements / Traceability / Test Strategy → Jira / PDF Export → Quality Gate Review → Delivery Handoff

Core Components

  • Guided interview and review engine with a multi-step flow
  • Claude-based summary and critique layer
  • Parser and structuring logic for requirements, traceability, and test artifacts
  • Integration and export layer for Jira test cases and shareable PDFs

Hard Decisions

  • Use a guided flow instead of an open free-text form
  • Add critique-before-write to improve requirement quality
  • Produce structured outputs rather than generic AI prose
  • Merge requirements and test strategy into one workspace

Guardrails

  • Mark open questions and ambiguity explicitly
  • Use reproducible parsing logic instead of unstructured free text
  • Score quality and testability before export and handoff
  • Apply context-aware follow-up questions to prevent premature specification

Learnings

Better requirements work does not stop at writing. The real leverage appears when discovery, quality review, traceability, and test strategy run in one workflow instead of being split across separate tools.

Tech Stack

Next.js 16 · React 19 · TypeScript · Claude API · Requirements Analysis · Traceability · Test Strategy Workspace · Jira Test Case Export · PDF Export · shadcn/ui · Tailwind CSS v4
03 · Own Project

Nutrikompass

AI-powered SaaS platform for therapeutic nutrition planning

Tech Stack

Next.js · TypeScript · tRPC

Nutrikompass supports therapeutic residential groups in structured nutrition planning for residents with eating disorders. The platform combines clinical expertise with AI automation: a RAG-based knowledge base, an automated LLM evaluation framework with 5 clinical benchmarks, and two-stage prompt injection detection for safe use in care settings.

Challenge

Therapeutic residential groups face a recurring problem: nutrition planning for residents with eating disorders is time-consuming, error-prone, and barely standardized. AI support carries specific risks — from hallucinations to prompt injection — that are intolerable in clinical contexts.

Solution

A specialized SaaS platform with a multi-layer security architecture: RAG with pgvector for fact-grounded responses, LLM-as-a-Judge for automated quality scoring against clinical benchmarks, two-stage injection detection, and a full Playwright E2E test suite in the CI/CD pipeline.

  • Multi-tenant SaaS architecture with Next.js and Prisma
  • RAG knowledge base with pgvector and OpenAI embeddings
  • AI Evaluation Framework: 5 clinical benchmarks, LLM-as-a-Judge
  • Two-stage prompt injection detection for clinical safety
  • Playwright E2E tests + GitHub Actions CI/CD pipeline
  • Stripe subscription billing, NextAuth role management
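
As an illustrative sketch of the two-stage injection check (the platform's real patterns and models are not public): a cheap heuristic screen runs on every request, and only suspicious input is escalated to an LLM classifier.

```typescript
// Stage 1: cheap pattern screen (patterns invented for illustration).
const SUSPICIOUS = [
  /ignore (all )?(previous|prior) instructions/i,
  /reveal .*system prompt/i,
  /you are now/i,
];

// Stage 2: an LLM classifier, injected as a function so the sketch stays provider-neutral.
async function isInjection(
  input: string,
  classify: (text: string) => Promise<boolean>,
): Promise<boolean> {
  if (!SUSPICIOUS.some(re => re.test(input))) return false; // fast path for normal input
  return classify(input); // escalate only suspicious input to keep cost and false positives low
}
```
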
System Architecture

System Flow

User Input → Validation → RAG Retrieval → LLM Generation → Guardrails → Export

Core Components

  • Next.js application with tRPC, Prisma, and Supabase
  • RAG layer with pgvector and clinical knowledge context
  • Security and evaluation layer for AI output quality
  • PDF, billing, audit, and FHIR export modules

Hard Decisions

  • Treat RAG, security, and evaluation as core architecture, not add-ons
  • Design roles, sessions, and auditability server-side from the start
  • Prepare interoperability and operational interfaces early

Guardrails

  • Prompt-injection detection and output sanitization
  • Clinical benchmarks and LLM-as-a-Judge before productive usage
  • Privacy and access controls for sensitive user data

Learnings

Security in AI systems isn't a feature — it's architecture. In clinical contexts, every AI output must be traceable and verifiable. This fundamentally shaped my understanding of responsible AI integration.

Tech Stack

Next.js · TypeScript · tRPC · Prisma · Supabase · pgvector · NextAuth · Stripe · OpenAI API · FHIR Export · Playwright · Netlify
04 · Own Project

AI QA Release Gates

Release and quality gates for LLM systems with requirements traceability

Tech Stack

Playwright · Python · Pytest

A Python-based open-source framework for production-oriented quality assurance of LLM applications. The focus is not just on isolated tests but on a reliable QA workflow with 46 automated checks, traceability to quality requirements, RAG evaluation, multi-model support, and CI-backed release gates for AI features.

Challenge

LLM systems rarely fail because of a single bug. They fail through harder-to-control quality risks: non-deterministic responses, poor traceability, hallucinations, bias, prompt injection, and regression-prone UI flows. Classical assertions are not enough for that.

Solution

I built a requirements-driven testing framework across 7 quality dimensions: security, consistency, hallucination, performance, bias, RAG, and UI. It combines semantic evaluation, multi-provider tests for Claude, GPT, and Gemini, generic Playwright checks for chatbot interfaces, HTML reporting, and GitHub-Action-based quality gates before release.

  • 46 automated tests across 7 quality dimensions for LLM risk areas
  • Requirements-driven QA with traceability and explicit release gates
  • Multi-model support for Claude, OpenAI GPT, and Google Gemini
  • 8 RAG tests covering grounding, faithfulness, and contradictions
  • 17 generic Playwright UI tests for chatbot interfaces
  • HTML reports and dashboard views for trends and regressions
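
The framework itself is Python/Pytest-based; purely to illustrate the release-gate idea, and kept in TypeScript like the other sketches here, a gate can reduce to a measurable threshold per quality dimension that fails the CI run when unmet. The dimension names follow the text; every threshold value is an assumption.

```typescript
// All thresholds below are invented; the real gates live in the Python framework.
type Dimension =
  | 'security' | 'consistency' | 'hallucination'
  | 'performance' | 'bias' | 'rag' | 'ui';

const thresholds: Record<Dimension, number> = {
  security: 1.0,       // zero tolerance: every security check must pass
  consistency: 0.9,
  hallucination: 0.95,
  performance: 0.9,
  bias: 0.95,
  rag: 0.9,
  ui: 0.9,
};

// The gate releases only if every dimension's pass rate meets its threshold.
function releaseGate(passRates: Record<Dimension, number>): { release: boolean; failing: Dimension[] } {
  const failing = (Object.keys(thresholds) as Dimension[]).filter(d => passRates[d] < thresholds[d]);
  return { release: failing.length === 0, failing };
}
```
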
System Architecture

System Flow

Requirements → Test Dimensions → Model / RAG / UI Checks → Evaluation → Report → Release Gate

Core Components

  • Python test harness and multi-model adapters
  • RAG and chatbot UI checks with Playwright
  • Semantic evaluation and HTML reporting
  • CI-backed quality gates for releases

Hard Decisions

  • Use requirements-to-test traceability instead of isolated prompts
  • Cover multiple quality dimensions, not only answer assertions
  • Tie release approval to measurable criteria

Guardrails

  • Thresholds for security, bias, RAG, and UI quality
  • Multi-provider checks against model-specific blind spots
  • Trend and regression views through reports instead of isolated findings

Learnings

AI quality becomes manageable only when testing, requirements, and release decisions are connected. Model quality alone is not enough; what matters is the discipline to make risks measurable and releasable.

Tech Stack

Playwright · Python · Pytest · Claude API · OpenAI API · Gemini API · GitHub Actions · Chart.js
05 · Project

Xray Quality Dashboard

Quality and go-live dashboard for Jira/Xray with KPI-based readiness scoring

Tech Stack

Next.js 14 · TypeScript · Prisma

A production-ready full-stack dashboard that translates Jira and Xray data into transparent quality KPIs, coverage views, and go-live signals. The project combines a demo mode, adapter-based integration, and a KPI engine into a tool that makes test status visible at both the delivery and stakeholder levels.

Challenge

Many teams have test cases, defects, and execution data in Jira/Xray, but no compact view of how release-ready a product really is. Raw data alone does not help delivery teams or stakeholders make defensible decisions.

Solution

I built a Next.js application with Prisma, PostgreSQL, NextAuth, and an adapter pattern for mock and Jira/Xray sources. A KPI engine calculates coverage, execution progress, defect pressure, and go-live readiness transparently, including drill-downs and realistic demo data for fast evaluation.

  • 0 to 100 readiness score with green-amber-red tiers
  • Coverage tracking across requirements, test cases, and execution status
  • Defect pressure and flaky-test visibility for realistic quality steering
  • Demo mode with realistic mock data and no Jira/Xray dependency
  • Adapter architecture for mock, Jira Cloud, and server/Xray setups
  • Recharts dashboards plus persisted KPI snapshots in PostgreSQL
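
A minimal sketch of deterministic, explainable readiness scoring; the KPI inputs mirror the description above, while the weights and tier cut-offs are assumptions for illustration.

```typescript
// KPI inputs as normalized 0..1 values; weights and cut-offs are assumptions.
interface Kpis {
  coverage: number;           // requirements with at least one linked test
  executionProgress: number;  // executed test cases / planned
  passRate: number;           // passed / executed
  defectPressure: number;     // open critical defects, normalized (1 = worst)
}

function readiness(k: Kpis): { score: number; tier: 'green' | 'amber' | 'red' } {
  // Deterministic weighted blend; defect pressure reduces readiness.
  const score = Math.round(
    100 * (0.3 * k.coverage + 0.25 * k.executionProgress + 0.3 * k.passRate + 0.15 * (1 - k.defectPressure)),
  );
  const tier = score >= 80 ? 'green' : score >= 60 ? 'amber' : 'red';
  return { score, tier };
}
```
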
System Architecture

System Flow

Jira/Xray Sync → Normalization → KPI Engine → Readiness Scoring → Dashboard Views

Core Components

  • Next.js app with Prisma, PostgreSQL, and NextAuth
  • Adapter layer for mock, Jira, and Xray integrations
  • KPI engine for coverage, progress, and defect pressure
  • Chart and dashboard layer with persisted snapshots

Hard Decisions

  • Design a demo mode for showcase and early evaluation
  • Keep KPI calculations deterministic and explainable
  • Encapsulate integration logic behind adapters instead of hardcoding

Guardrails

  • Base readiness on multiple KPIs instead of one metric
  • Run mock and live data through the same structures
  • Provide drill-downs for auditability and root-cause analysis

Learnings

Quality becomes manageable for stakeholders only when test data is translated into understandable product signals. A strong dashboard therefore does more than count cases. It makes release risk visible.

Tech Stack

Next.js 14 · TypeScript · Prisma · PostgreSQL · NextAuth · Recharts · Jira / Xray · KPI Scoring · Demo Mode
06 · Project

Xray Automation Bridge

From manual Xray test cases to reviewable automation assets

Tech Stack

Next.js 16 · TypeScript · Tailwind CSS 4

A consultant-grade internal tool that transforms manual Xray test cases into normalized, reviewable automation assets. Rather than overpromising on code generation, it produces Gherkin, Playwright skeletons, suitability assessments, and explicit notes on blockers, weak test design, and traceability.

Challenge

Many teams have Xray test cases but no reliable bridge to automation. The real problem is rarely missing code generation. It is weak step quality, unclear expected results, missing setup data, and poor traceability between test management and automation engineering.

Solution

I built a Next.js application with a shared import abstraction for mock and live Jira/Xray data, a normalization pipeline, deterministic automation-readiness assessment, and export flows for Gherkin, Playwright, Markdown, and JSON. The product is intentionally honest about what is automation-ready and what still needs human test design work.

  • Import flow for demo or Jira/Xray test cases through one consistent pipeline
  • Automation suitability scoring with transparent heuristics, warnings, and blockers
  • Generation of Gherkin features and Playwright TypeScript skeletons
  • Explicit avoidance of fabricated selectors, URLs, and test data
  • Prisma and PostgreSQL model for traceability, runs, and generated artifacts
  • Export of .feature, .ts, .md, and .json artifacts for engineering handoff
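
An illustrative sketch of deterministic suitability heuristics along the lines described above; the rules and penalty values are assumptions, not the tool's actual code.

```typescript
interface TestStep { action: string; expectedResult?: string }
interface Assessment { score: number; warnings: string[]; blockers: string[] }

function assessSuitability(steps: TestStep[]): Assessment {
  const warnings: string[] = [];
  const blockers: string[] = [];

  if (steps.length === 0) blockers.push('No steps: nothing to automate.');
  steps.forEach((s, i) => {
    if (!s.expectedResult) blockers.push(`Step ${i + 1} has no expected result, so no assertion is possible.`);
    if (/manually|visually|by eye/i.test(s.action)) warnings.push(`Step ${i + 1} implies a manual or visual check.`);
  });

  // Deterministic score: fixed penalties per finding, no AI involved.
  const score = Math.max(0, 100 - 40 * blockers.length - 10 * warnings.length);
  return { score, warnings, blockers };
}
```
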
System Architecture

System Flow

Jira / Xray Import → Normalization → Suitability Assessment → Gherkin / Playwright → Review → Export

Core Components

  • Import abstraction for mock and live Jira/Xray data
  • Normalization pipeline for steps, preconditions, and metadata
  • Heuristic suitability assessment with warnings and blockers
  • Export and persistence layer for artifacts and traceability

Hard Decisions

  • Do not generate fake runnable tests with invented selectors
  • Use deterministic heuristics instead of uncontrolled AI output
  • Keep source traceability as a core generation principle

Guardrails

  • Surface blockers, ambiguity, and missing expected results explicitly
  • Preserve raw payloads and external IDs for review and auditability
  • Generate conservative skeletons instead of overstated automation promises

Learnings

Good test automation does not start with writing code. It starts with the quality of the source test. The real leverage is making ambiguity, missing assertions, and setup risks visible before a team builds expensive pseudo-automation.

Tech Stack

Next.js 16 · TypeScript · Tailwind CSS 4 · Prisma · PostgreSQL · Jira / Xray · Playwright · Gherkin · Zod
07 · Project

Claude Code QA Skills

Three specialized skills for test concepts, E2E test execution, and coverage analysis

Tech Stack

Claude Code Skills · Prompt Engineering · Playwright

A suite of three Claude Code skills that automate core QA tasks: a test concept skill for ISTQB-aligned documentation, a run-E2E skill for structured Playwright test execution with failure analysis, and an expand-E2E skill for coverage gap analysis and automated test creation. Together they cover the full lifecycle from test strategy through execution to coverage optimization.

Challenge

QA teams face recurring tasks: test concepts are often incomplete, E2E tests run without structured analysis, and coverage gaps are discovered too late. Individual tools only solve parts of the problem.

Solution

I built three specialized skills that complement each other: the test concept skill generates ISTQB-compliant documents with intake logic and compliance checks. The run-E2E skill executes Playwright tests, categorizes failures by type, and provides root cause analysis. The expand-E2E skill analyzes existing tests, identifies coverage gaps by priority, and automatically creates new tests.

  • Test Concept Skill: ISTQB-compliant documentation with traffic-light scoring
  • Run E2E Skill: Structured test execution with failure categorization
  • Expand E2E Skill: Coverage gap analysis and automated test creation
  • Failure categories: product bug, test bug, auth issue, environment issue, flaky test
  • Prioritized gaps: P0 (critical) through P3 (nice-to-have)
  • All skills follow existing project conventions and patterns
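
A hypothetical sketch of the failure-categorization step: mapping a Playwright failure to the categories above from simple signals. The real skills reason over more context; these patterns are illustrative only.

```typescript
type FailureCategory = 'product-bug' | 'test-bug' | 'auth-issue' | 'environment-issue' | 'flaky-test';

// Signals: the error text of the failed run, plus whether a retry passed.
function categorize(errorMessage: string, passedOnRetry: boolean): FailureCategory {
  if (passedOnRetry) return 'flaky-test';
  if (/401|403|login|session expired/i.test(errorMessage)) return 'auth-issue';
  if (/ECONNREFUSED|ETIMEDOUT|502|503/i.test(errorMessage)) return 'environment-issue';
  if (/locator|selector|strict mode violation/i.test(errorMessage)) return 'test-bug';
  return 'product-bug'; // default: the assertion failed against real app behavior
}
```
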
System Architecture

System Flow

Project Context Intake → Config Detection → Test Execution / Gap Analysis → Failure Categorization → Coverage Map Update

Core Components

  • Test concept engine with ISTQB mapping and compliance checks
  • Run engine with Playwright integration and failure analysis
  • Expand engine with coverage inventory and test generation
  • Structured reports with prioritized recommendations

Hard Decisions

  • Three specialized skills instead of one generic monolith
  • Failure categorization over generic error reporting
  • Incremental coverage expansion instead of exhaustive generation

Guardrails

  • No production code changes from E2E skills without explicit request
  • Coverage gaps prioritized rather than testing everything at once
  • Respect existing project patterns and conventions

Learnings

Skills are the most effective way to operationalize QA knowledge. Instead of writing one-off prompts, you create reproducible workflows that scale expertise and reduce quality variance.

Tech Stack

Claude Code Skills · Prompt Engineering · Playwright · ISTQB · QA Governance · E2E Testing · Coverage Analysis · DOCX Workflows
08 · Project

Feierabendtrader

A local analysis tool for swing-trading setups with Yahoo Finance, Claude Haiku, and TradingView

Tech Stack

Next.js 15 · React 19 · TypeScript

A local Next.js app that delivers 5–10 justified swing-trading breakout setups from the US stock market at the push of a button. More than anything, this project shows my builder profile: a lean analysis tool with free market-data screening, AI-assisted prioritization via Claude Haiku, and TradingView charts — without auth, database, or hosting dependency.

Challenge

Manual stock screening is time-consuming and subjective. Existing tools are often expensive, subscription-based, or provide no explainable logic behind their setups.

Solution

I built a multi-stage screening funnel: Yahoo Finance filters ~80 candidates down to ~20, and Claude Haiku then analyzes these and returns the best 5–10 breakout setups with entry zone, stop-loss, risk/reward ratio, and justification. Everything runs locally, with no infrastructure overhead.

  • Next.js 15 app with React 19, TypeScript, and Tailwind CSS 4
  • Multi-stage screening funnel via Yahoo Finance (free, ~80→20 candidates)
  • Claude Haiku analysis for 5–10 prioritized breakout setups with reasoning
  • Setup cards with breakout level, entry zone, stop-loss, risk/reward, and TradingView chart
  • Market hours badge, localStorage cache (24h), and funnel metrics
  • Zod validation for type-safe API responses
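
A minimal sketch of the Zod validation mentioned above; the schema fields mirror the setup cards described in the text, but the exact shape is an assumption.

```typescript
import { z } from 'zod';

// Hypothetical setup-card schema; field names mirror the description above.
const SetupSchema = z.object({
  ticker: z.string().min(1),
  breakoutLevel: z.number().positive(),
  entryZone: z.tuple([z.number(), z.number()]), // [low, high] price band for entries
  stopLoss: z.number().positive(),
  riskReward: z.number().min(1),                // setups below 1:1 R/R are rejected
  reasoning: z.string(),                        // the model must justify every setup
});

// Validate the model's JSON before rendering; malformed output is dropped, not shown.
function parseSetups(modelOutput: string) {
  try {
    const result = z.array(SetupSchema).safeParse(JSON.parse(modelOutput));
    return result.success ? result.data : [];
  } catch {
    return []; // non-JSON model output is treated as "no setups"
  }
}
```
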
System Architecture

System Flow

Market Data Fetch → Screening Funnel → AI Prioritization → Setup Scoring → TradingView Visualization

Core Components

  • Yahoo Finance data layer and local cache
  • Multi-stage screening funnel for candidate reduction
  • Claude Haiku analysis for prioritized setups
  • Setup cards and TradingView visualization

Hard Decisions

  • Keep the tool local instead of building cloud-heavy infrastructure
  • Pre-filter candidates before invoking AI
  • Prefer explainable setups over black-box output

Guardrails

  • Use explicit funnel stages to control candidate volume
  • Apply structured criteria for entry, stop, and risk/reward
  • Use cache and market status for reproducible behavior

Learnings

AI-powered financial tools don’t have to be expensive. Targeted prompting on a solid data foundation is enough to turn ~80 raw candidates into qualified, explained setups in seconds — without any cloud dependency.

Tech Stack

Next.js 15 · React 19 · TypeScript · Tailwind CSS 4 · Claude Haiku · Anthropic API · yahoo-finance2 · TradingView Widget · Zod

Reusable Assets & Accelerators

Reusable workflows and delivery assets that help teams move from vague requests to reviewable outcomes faster.

Reusable Asset · LLM QA · Release Gates · Playwright

AI QA Release Gates

A reusable QA framework for AI features with traceability, multi-model tests, and explicit release criteria before shipping.

Best suited for

Teams that want to move AI features beyond prototyping and qualify them as measurable, reliable delivery components.

Typical deliverables

  • Test suite for security, bias, RAG, performance, and UI
  • Requirements-to-test traceability
  • HTML reports and release gate logic
Reusable Asset · Requirements · Interview Flow · PDF Output

Requirements & Quality Workspace

A combined discovery and quality workspace that turns vague ideas into reviewable requirements, traceability, test strategy, and exportable delivery artifacts.

Best suited for

Product owners, business teams, and delivery setups that need to turn unclear requests into testable, reviewable implementation artifacts faster.

Typical deliverables

  • Guided discovery flow with analysis and quality review
  • Structured requirements, traceability, and test strategy artifacts
  • Jira and PDF exports for alignment, review, and delivery kickoff
Reusable Asset · Claude Code Skills · E2E Testing · ISTQB · QA Automation

Claude Code QA Skills

Three skills for ISTQB test concepts, Playwright execution with failure analysis, and structured coverage optimization in AI-assisted delivery setups.

Best suited for

QA teams and projects that want to automate test documentation, execution, and coverage analysis as a repeatable delivery workflow.

Typical deliverables

  • ISTQB-compliant test concepts with intake and compliance checks
  • Structured E2E test execution with failure categorization
  • Coverage gap analysis with automated test creation

Capabilities & Technologies

My mix of professionally applied QA, requirements, and delivery practice plus my own AI and builder capabilities.

QA Leadership & Quality Engineering

Professionally applied QA leadership for complex delivery contexts: from test management and governance to traceability, reporting, and dependable release gates.

QA Leadership · Test Management · Release Gates · Risk-Based Testing · Defect Governance · Quality Reporting · Stakeholder Steering · Security Testing · Traceability

Test Automation & Tooling

Operational practice in automatable test design, Cucumber/Gherkin structuring, Playwright collaboration, APIs, and technical workflows for reproducible quality work.

Playwright · Cucumber / Gherkin · API Testing · Test Data Design · Tool Integration · Next.js 16 · TypeScript · tRPC / APIs · Workflow Prototypes

Delivery & Release Governance

Release-oriented delivery with Jenkins, Bitbucket, Jira/Xray, CI/CD coordination, workflow automation, and operational hardening in day-to-day project work.

Jenkins · Bitbucket · GitHub Actions · Jira / Xray · Confluence · CI/CD · Workflow Automation · Review Loops · Release Readiness · Operational Hardening

Requirements & Builder Practice

Requirements engineering, stakeholder guidance, and complementary builder practice with AI-assisted workflows, prompting, and own product prototypes.

Requirements Engineering · Specification Facilitation · Stakeholder Facilitation · Agile Delivery · AI-Assisted Requirements · Prompt Engineering · Agentic Workflows · Product Thinking · Consulting Leadership · Strategic Automation

How I Work

Principles I use to make quality, requirements, and delivery dependable.

Quality as Mindset

Quality is not a process step, but a way of thinking. I integrate validation, guardrails, and review loops into every phase, from requirements to release.

Product Thinking

I think in user problems, workflows, and operating reality, not isolated features. Every technical decision must serve the product.

Ownership

I take responsibility for outcomes, not just tasks. If a prototype is not yet viable, I make the path to dependable implementation explicit.

Contact

Interested in working together? I'd love to hear from you.