Use when generating tests for new or existing code to improve coverage - provides a structured workflow for analyzing code, creating tests, and validating coverage goals.
# Test Generation
## Overview
Generate tests systematically by analyzing code paths, covering edge cases, and validating coverage targets.
## When to Use
- Creating tests for new features
- Improving coverage in weak areas
- Building regression or integration test suites
**Avoid when:**
- The task is only running existing tests (use dev-workflows)
## Quick Reference
| Task | Load reference |
|---|---|
| Test generation workflow | `skills/test-generation/references/generate-tests.md` |
## Workflow
1. Identify the target scope and test type.
2. Load the test generation reference.
3. Analyze code paths and edge cases.
4. Generate tests and validate coverage.
5. Summarize results and remaining gaps.
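Steps 3 and 4 can be sketched against a hypothetical target: the `clamp` function below stands in for real code under test, and each test exercises one identified path or boundary. The names are illustrative, not part of this skill.

```python
# Hypothetical target with four code paths: invalid range, below, above, within.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    if value < low:
        return low
    if value > high:
        return high
    return value

# One test per path, plus the boundary edge cases a branch analysis surfaces.
def test_below_range():
    assert clamp(-5, 0, 10) == 0

def test_above_range():
    assert clamp(15, 0, 10) == 10

def test_within_range():
    assert clamp(5, 0, 10) == 5

def test_boundaries():
    # Boundary values must pass through unchanged, not get clamped past them.
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10

def test_invalid_range():
    try:
        clamp(1, 10, 0)
        assert False, "expected ValueError for inverted range"
    except ValueError:
        pass
```

Enumerating the branches first (two guards, two clamping paths, one fall-through) is what makes the boundary and error tests obvious rather than incidental.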
## Output
- Generated tests
- Coverage report and follow-ups
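Coverage validation depends on the project's tooling; as one common sketch (assuming a Python project with `pytest` and `pytest-cov` installed, and a package name of `mypackage`):

```shell
# Run the suite, show uncovered lines, and fail below a chosen threshold.
pytest --cov=mypackage --cov-report=term-missing --cov-fail-under=90
```

The `term-missing` report lists exactly which lines remain uncovered, which feeds directly into the "follow-ups" output above.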
## Common Mistakes
- Writing tests without understanding code paths
- Ignoring edge cases or failure modes
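To illustrate the second mistake: the `parse_port` function below is hypothetical, but the pattern is the point — a suite that only asserts the happy path silently ignores every failure mode.

```python
def parse_port(text):
    """Parse a TCP port number, rejecting out-of-range values."""
    port = int(text)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Happy path alone would leave the guards below completely untested.
assert parse_port("8080") == 8080

# Failure modes: both out-of-range values and non-numeric input must raise.
for bad in ("0", "65536", "-1", "http"):
    try:
        parse_port(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised, f"expected ValueError for {bad!r}"
```

Writing the failure-mode loop first is a cheap way to force the question "what inputs should this reject?" before any happy-path tests are generated.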