All Skills

7109 skills found

Skills List

Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.

zechenzhangAGI

State-of-the-art text-to-image generation with Stable Diffusion models via HuggingFace Diffusers. Use when generating images from text prompts, performing image-to-image translation, inpainting, or building custom diffusion pipelines.

zechenzhangAGI
awq-quantization

877 · dev-devops

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
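The core mechanic can be illustrated with a toy group-wise 4-bit quantizer. This is a hypothetical numpy sketch of symmetric int4 quantization, not the AWQ algorithm itself (AWQ additionally rescales salient weight channels using activation statistics before quantizing):

```python
import numpy as np

def quantize_4bit(w, group_size=128):
    """Symmetric group-wise int4 quantization: one float scale per group."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7  # map to [-7, 7]
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    return (q.astype(np.float32) * scale).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 512)).astype(np.float32)

q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w.shape)

# Relative reconstruction error stays small despite 4-bit storage.
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
```

Group-wise scales are what keep the error bounded: each group's worst-case rounding error is half of that group's own scale, rather than a single global one.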

zechenzhangAGI
peft-fine-tuning

877 · research

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library, integrated with the transformers ecosystem.
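The "<1% of parameters" claim is easy to see with a from-scratch sketch of a single LoRA layer. This is a hypothetical numpy illustration of the low-rank-adapter idea, not the PEFT API:

```python
import numpy as np

d_in, d_out, r, alpha = 1024, 1024, 4, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha/r) * B @ A, never materialized explicitly.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

trainable = A.size + B.size
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.2%}")  # → 0.78%
```

Because B is zero-initialized, the adapted layer starts out exactly equal to the frozen base layer, and training only ever touches the r·(d_in + d_out) adapter parameters.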

zechenzhangAGI
crewai-multi-agent

877 · productivity

Multi-agent orchestration framework for autonomous AI collaboration. Use when building teams of specialized agents working together on complex tasks, when you need role-based agent collaboration with memory, or for production workflows requiring sequential/hierarchical execution. Built without LangChain dependencies for lean, fast execution.

zechenzhangAGI

Provides guidance for training and analyzing Sparse Autoencoders (SAEs) using SAELens to decompose neural network activations into interpretable features. Use when discovering interpretable features, analyzing superposition, or studying monosemantic representations in language models.
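The decomposition idea can be sketched in a few lines. This is a hypothetical toy sparse autoencoder (untrained, just to show the shapes and the sparsity mechanism), not the SAELens API:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64  # overcomplete: more features than activation dims

W_enc = rng.standard_normal((d_model, d_sae)) * 0.1
b_enc = -0.5 * np.ones(d_sae)  # negative bias pushes most features to zero
W_dec = rng.standard_normal((d_sae, d_model)) * 0.1

def encode(acts):
    # Sparse, non-negative feature coefficients.
    return np.maximum(acts @ W_enc + b_enc, 0)

def decode(feats):
    # Reconstruction as a sparse sum of decoder directions.
    return feats @ W_dec

acts = rng.standard_normal((8, d_model))   # stand-in for model activations
feats = encode(acts)
recon = decode(feats)
sparsity = (feats == 0).mean()
```

Training adds a reconstruction loss plus a sparsity penalty on `feats`; the learned rows of `W_dec` are the candidate interpretable feature directions.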

zechenzhangAGI
phoenix-observability

877 · dev-testing

Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring production AI systems with real-time insights.

zechenzhangAGI
hqq-quantization

877 · research

Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.

zechenzhangAGI

Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.

zechenzhangAGI
qdrant-vector-search

877 · dev-database

High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.
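What a vector search engine does at its core can be shown with a brute-force sketch. This hypothetical numpy example illustrates cosine nearest-neighbor retrieval; Qdrant itself uses approximate indexes (HNSW) and payload filtering to do this at scale:

```python
import numpy as np

def top_k_cosine(query, corpus, k=3):
    """Brute-force cosine similarity search over a corpus of embeddings."""
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = corpus_n @ query_n
    idx = np.argsort(-sims)[:k]  # indices of the k most similar vectors
    return idx, sims[idx]

rng = np.random.default_rng(0)
corpus = rng.standard_normal((1000, 64))           # stand-in document embeddings
query = corpus[42] + 0.01 * rng.standard_normal(64)  # near-duplicate of doc 42

idx, scores = top_k_cosine(query, corpus)
```

The brute-force version is O(n) per query; approximate indexes trade a little recall for sub-linear search, which is the point of a dedicated engine.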

zechenzhangAGI

LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.

zechenzhangAGI

Evaluates LLMs across 100+ benchmarks from 18+ harnesses (MMLU, HumanEval, GSM8K, safety, VLM) with multi-backend execution. Use when needing scalable evaluation on local Docker, Slurm HPC, or cloud platforms. NVIDIA's enterprise-grade platform with container-first architecture for reproducible benchmarking.

zechenzhangAGI

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.

zechenzhangAGI

Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML models as APIs, or running batch jobs with automatic scaling.

zechenzhangAGI

Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.
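The hook-and-cache pattern behind this kind of tooling can be sketched without torch. This is a hypothetical toy illustration of intercepting and patching intermediate activations, not the TransformerLens API (which attaches HookPoints throughout a real transformer):

```python
import numpy as np

class HookPoint:
    """A named interception point; hooks may observe or replace the value."""
    def __init__(self, name):
        self.name = name
        self.hooks = []
    def __call__(self, x):
        for fn in self.hooks:
            out = fn(self.name, x)
            if out is not None:
                x = out  # a hook may patch the activation
        return x

class TinyMLP:
    def __init__(self, rng):
        self.w1 = rng.standard_normal((4, 4))
        self.w2 = rng.standard_normal((4, 4))
        self.hook_mid = HookPoint("mid")
    def forward(self, x):
        h = np.maximum(x @ self.w1, 0)
        h = self.hook_mid(h)          # interception point
        return h @ self.w2

rng = np.random.default_rng(0)
model = TinyMLP(rng)
x = rng.standard_normal((1, 4))

# 1) Activation caching: store a copy of the intermediate value.
cache = {}
model.hook_mid.hooks = [lambda name, act: cache.setdefault(name, act.copy())]
y = model.forward(x)

# 2) Activation patching: replace the intermediate value and rerun.
model.hook_mid.hooks = [lambda name, act: np.zeros_like(act)]
y_patched = model.forward(x)
```

Caching plus patching is the basic loop of most mechanistic-interpretability experiments: record an activation under one input, then substitute it (or an edited version) into another run and observe the behavioral change.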

zechenzhangAGI
prompt-analyzer

873 · research

Prompt analysis and insights: view prompt details, compare differences, get recommendations for similar prompts, and browse element-library statistics.

huangserva
prompt-extractor

873 · research

Automatically extracts the modular structure of AI image-generation prompts, distilling reusable components from large collections of prompts.

huangserva
design-master

873 · design

Graphic-design controller: automatically generates graphic-design prompts, supporting posters, logos, illustrations, and other design types.

huangserva

AI domain classifier: intelligently analyzes prompt content and accurately determines which domain it belongs to (portrait / art / design / product / video).

huangserva

Intelligent prompt generator v2.0: supports three modes (portrait / cross-domain / design), with semantic understanding, commonsense reasoning, and consistency checking.

huangserva

Universal learner: automatically extracts reusable elements from prompts in any domain, continuously learning and accumulating knowledge.

huangserva
video-master

873 · research

Video-generation controller: automatically generates video scene prompts, supporting dynamic effects, transitions, camera moves, and more.

huangserva
prompt-master

873 · productivity

Prompt controller: intelligently selects the appropriate domain skill and generates prompts, with automatic domain classification and dispatch.

huangserva

Product-photography controller: automatically generates product-photography prompts for commercial shoots, e-commerce images, and similar scenarios.

huangserva