Test

Test, debugging

580 skills available

Skills List

wcag-audit-patterns

26K · dev-testing

Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fixing WCAG violations, or implementing accessible design patterns.

wshobson
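As a flavor of what one automated WCAG check looks like, here is a minimal pure-Python sketch of the "images need text alternatives" rule (WCAG 1.1.1). The `AltTextAuditor` name and the stdlib-only approach are illustrative assumptions; a real audit skill would combine an engine such as axe-core with manual verification.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flag <img> tags that carry no alt attribute (WCAG 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<no src>"))

def audit_alt_text(html: str) -> list:
    """Return the src of every image missing an alt attribute."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.violations
```

For example, `audit_alt_text('<img src="a.png"><img src="b.png" alt="logo">')` reports only `a.png`. Note that an *empty* `alt=""` passes here by design: it is the correct markup for decorative images.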
web3-testing

26K · dev-testing

Test smart contracts comprehensively using Hardhat and Foundry with unit tests, integration tests, and mainnet forking. Use when testing Solidity contracts, setting up blockchain test suites, or validating DeFi protocols.

wshobson
screen-reader-testing

26K · dev-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.

wshobson

Test Temporal workflows with pytest, time-skipping, and mocking strategies. Covers unit testing, integration testing, replay testing, and local development setup. Use when implementing Temporal workflow tests or debugging test failures.

wshobson

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.

wshobson
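The data-contract idea is easy to sketch without Great Expectations or dbt: declare each column's type and nullability once, then check every row against that declaration. The `CONTRACT` schema and `validate` helper below are hypothetical stand-ins for what those tools express declaratively.

```python
# Hypothetical data contract: column name -> (expected type, nullable?)
CONTRACT = {
    "order_id": (int, False),
    "amount": (float, False),
    "coupon": (str, True),
}

def validate(rows):
    """Return human-readable violations of CONTRACT, one per failed check."""
    violations = []
    for i, row in enumerate(rows):
        for col, (typ, nullable) in CONTRACT.items():
            value = row.get(col)
            if value is None:
                if not nullable:
                    violations.append(f"row {i}: {col} is null")
            elif not isinstance(value, typ):
                violations.append(f"row {i}: {col} expected {typ.__name__}")
    return violations
```

Libraries like Great Expectations add what this sketch lacks: reusable expectation suites, profiling, and reports that can gate a pipeline run.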
bats-testing-patterns

26K · dev-testing

Master Bash Automated Testing System (Bats) for comprehensive shell script testing. Use when writing tests for shell scripts, CI/CD pipelines, or requiring test-driven development of shell utilities.

wshobson

Master ShellCheck static analysis configuration and usage for shell script quality. Use when setting up linting infrastructure, fixing code issues, or ensuring script portability.

wshobson
llm-evaluation

26K · dev-testing

Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.

wshobson
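One of the simplest automated metrics such an evaluation framework builds on is token-overlap F1 (the SQuAD-style answer score). The sketch below is a generic implementation of that metric, not code from this skill; whitespace tokenization is a simplifying assumption.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For instance, `token_f1("the capital is paris", "paris")` gives 0.4: perfect recall, but three extra tokens drag precision down. Production evaluation adds human feedback and benchmarks on top of cheap metrics like this.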
debugging-strategies

26K · dev-testing

Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use when investigating bugs, performance issues, or unexpected behavior.

wshobson
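A core systematic-debugging technique is bisection: instead of testing every change, binary-search for the first one that broke the build, exactly as git bisect does. This is a generic sketch of that idea (the `first_bad` helper and its `is_bad` probe are illustrative names), assuming history is clean before the breakage and broken after it.

```python
def first_bad(commits, is_bad):
    """Binary-search the first commit for which is_bad() holds.

    Assumes a monotonic history: every commit before the culprit is good,
    every commit from the culprit on is bad. O(log n) checks instead of n.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid        # culprit is at mid or earlier
        else:
            lo = mid + 1    # culprit is strictly after mid
    return commits[lo]
```

With 1,000 commits this needs about 10 checks; the same halving strategy applies to config options, input sizes, or lines of a failing test.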
e2e-testing-patterns

26K · dev-testing

Master end-to-end testing with Playwright and Cypress to build reliable test suites that catch bugs, improve confidence, and enable fast deployment. Use when implementing E2E tests, debugging flaky tests, or establishing testing standards.

wshobson

Implement comprehensive testing strategies using Jest, Vitest, and Testing Library for unit tests, integration tests, and end-to-end testing with mocking, fixtures, and test-driven development. Use when writing JavaScript/TypeScript tests, setting up test infrastructure, or implementing TDD/BDD workflows.

wshobson

Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use when reviewing pull requests, establishing review standards, or mentoring developers.

wshobson
web-renderer-test

26K · dev-testing

Add a test case to the web renderer.

remotion-dev
langsmith-fetch

23K · dev-testing

Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations, or examining agent performance. Automatically fetches recent traces and analyzes execution patterns. Requires langsmith-fetch CLI installed.

ComposioHQ
reviewing-code

22K · dev-testing

Review code for quality, maintainability, and correctness. Use when reviewing pull requests, evaluating code changes, or providing feedback on implementations. Focuses on API design, patterns, and actionable feedback.

jlowin
testing-python

22K · dev-testing

Write and evaluate effective Python tests using pytest. Use when writing tests, reviewing test code, debugging test failures, or improving test coverage. Covers test design, fixtures, parameterization, mocking, and async testing.

jlowin
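The heart of pytest's test design is plain: functions named `test_*` with bare `assert` statements, which pytest discovers and rewrites into rich failure reports. A minimal sketch, assuming a hypothetical `slugify` helper as the code under test:

```python
def slugify(title: str) -> str:
    """Hypothetical helper under test: lowercase and hyphen-join words."""
    return "-".join(title.lower().split())

# pytest collects any function named test_*; a bare assert is the check.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    # str.split() with no argument drops repeated and surrounding whitespace
    assert slugify("  Hello \t World ") == "hello-world"
```

From here the skill's other topics layer on naturally: `@pytest.mark.parametrize` turns these two cases into a table, fixtures replace repeated setup, and mocking isolates external dependencies.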
lint-fixer

21K · dev-testing

Use this skill to run npm run lint and fix the reported issues. Triggers when fixing lint errors after code changes or when validating code against project style guidelines.

yamadashy

List the contents of an npm package tarball before publishing. Use when the user wants to see what files are included in an npm bundle, verify package contents, or debug npm publish issues.

vercel

Capture API response test fixture.

vercel
smoke-test

20K · dev-testing

Create a Mastra project using create-mastra and smoke test the studio in Chrome.

mastra-ai

E2E validation workflow for frontend changes in playground packages using Playwright MCP.

mastra-ai
iterate-pr

18K · dev-testing

Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automates the feedback-fix-push-wait cycle.

davila7
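The feedback-fix-push-wait cycle the description mentions is, structurally, a bounded polling loop. This sketch models it with two hypothetical hooks, `run_checks` and `apply_fix`, standing in for calls to the CI provider and to whatever edits the code; it is an illustration of the loop shape, not this skill's implementation.

```python
import time

def iterate_until_green(run_checks, apply_fix, max_rounds=5, delay=0.0):
    """Re-run checks, fix each failure, and stop once everything is green.

    run_checks() -> list of failing check names (hypothetical CI hook).
    apply_fix(name) attempts a fix for one failure (hypothetical edit hook).
    Raises if the checks are still red after max_rounds iterations.
    """
    for round_no in range(1, max_rounds + 1):
        failures = run_checks()
        if not failures:
            return round_no          # all checks green
        for name in failures:
            apply_fix(name)
        time.sleep(delay)            # in practice: wait for CI to re-run
    raise RuntimeError("checks still failing after max_rounds")
```

The `max_rounds` cap matters: an unbounded loop can ping-pong forever when two fixes undo each other, so a real automation stops and hands control back to the developer.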
move-code-quality

18K · dev-testing

Analyzes Move language packages against the official Move Book Code Quality Checklist. Use this skill when reviewing Move code, checking Move 2024 Edition compliance, or analyzing Move packages for best practices. Activates automatically when working with .move files or Move.toml manifests.

davila7
phoenix-observability

18K · dev-testing

Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring production AI systems with real-time insights.

davila7