
# ai-model-wechat
Use this skill when developing WeChat Mini Programs (小程序, 企业微信小程序, wx.cloud-based apps) that need AI capabilities. Features text generation (generateText) and streaming (streamText) with callback support (onText, onEvent, onFinish) via wx.cloud.extend.AI. Built-in models include Hunyuan (hunyuan-2.0-instruct-20251111 recommended) and DeepSeek (deepseek-v3.2 recommended). API differs from JS/Node SDK - streamText requires data wrapper, generateText returns raw response. NOT for browser/Web apps (use ai-model-web), Node.js backend (use ai-model-nodejs), or image generation (not supported).
## When to use this skill
Use this skill for calling AI models in WeChat Mini Programs using `wx.cloud.extend.AI`.

Use it when you need to:

- Integrate AI text generation in a Mini Program
- Stream AI responses with callback support
- Call Hunyuan models from the WeChat environment

Do NOT use for:

- Browser/Web apps → use the `ai-model-web` skill
- Node.js backend or cloud functions → use the `ai-model-nodejs` skill
- Image generation → use the `ai-model-nodejs` skill (not available in Mini Programs)
- HTTP API integration → use the `http-api` skill
## Available Providers and Models

CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
|---|---|---|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | ✅ `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | ✅ `deepseek-v3.2` |
## Prerequisites

- WeChat base library 3.7.1+
- No extra SDK installation needed
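If you want to guard against older clients at runtime, the check below is one way to do it. This is a sketch, not part of any SDK: `compareVersion` is a hypothetical helper (a simple version-string compare), and the modal copy is illustrative.

```js
// Sketch: guard against base libraries older than 3.7.1 before using
// wx.cloud.extend.AI. compareVersion is a hypothetical helper, not a built-in.
function compareVersion(v1, v2) {
  const a = v1.split(".").map(Number);
  const b = v2.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x !== y) return x > y ? 1 : -1;
  }
  return 0;
}

const { SDKVersion } = wx.getSystemInfoSync();
if (compareVersion(SDKVersion, "3.7.1") < 0) {
  // AI APIs are unavailable below 3.7.1; prompt the user to upgrade WeChat
  wx.showModal({ title: "提示", content: "请升级微信版本以使用 AI 功能" });
}
```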
## Initialization

```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  },
});
```
## generateText() - Non-streaming

⚠️ **Different from the JS/Node SDK:** the return value is the raw model response.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like the JS/Node SDK
console.log(res.choices[0].message.content); // Access via choices array
console.log(res.usage); // Token usage
```
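Because `messages` follows the familiar chat-completions shape, multi-turn context works as expected. A minimal sketch (the conversation content is illustrative):

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111",
  messages: [
    { role: "system", content: "你是一个简洁的助手" }, // system prompt
    { role: "user", content: "推荐一本科幻小说" },
    { role: "assistant", content: "《三体》" }, // earlier turn carried as context
    { role: "user", content: "用一句话说明理由" }, // follow-up question
  ],
});

console.log(res.choices[0].message.content);
```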
## streamText() - Streaming

⚠️ **Different from the JS/Node SDK:** parameters must be wrapped in a `data` object, and callbacks are supported.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: { // ⚠️ Required wrapper
    model: "hunyuan-2.0-instruct-20251111", // Recommended model
    messages: [{ role: "user", content: "hi" }],
  },
  onText: (text) => { // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => { // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => { // Optional: completion callback
    console.log("Done:", fullText);
  },
});

// Async iteration also available
for await (let str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (let event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ Check for [DONE] to stop
    break;
  }
}
```
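In page code, `onText` pairs naturally with `setData` for real-time rendering. A minimal sketch, assuming a page with a `reply` field bound in WXML (the page path and handler name are illustrative):

```js
// pages/chat/chat.js (the names `reply` and `ask` are illustrative)
Page({
  data: { reply: "" },

  async ask() {
    const model = wx.cloud.extend.AI.createModel("hunyuan-exp");
    await model.streamText({
      data: {
        model: "hunyuan-2.0-instruct-20251111",
        messages: [{ role: "user", content: "讲个故事" }],
      },
      // Append each incremental chunk to the bound field as it arrives
      onText: (text) => this.setData({ reply: this.data.reply + text }),
      onFinish: (fullText) => console.log("Done:", fullText),
    });
  },
});
```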
## API Comparison: JS/Node SDK vs WeChat Mini Program

| Feature | JS/Node SDK | WeChat Mini Program |
|---|---|---|
| Namespace | `app.ai()` | `wx.cloud.extend.AI` |
| generateText params | Direct object | Direct object |
| generateText return | `{ text, usage, messages }` | Raw: `{ choices, usage }` |
| streamText params | Direct object | ⚠️ Wrapped in `data: {...}` |
| streamText return | `{ textStream, dataStream }` | `{ textStream, eventStream }` |
| Callbacks | Not supported | `onText`, `onEvent`, `onFinish` |
| Image generation | Node SDK only | Not available |
## Type Definitions

### streamText() Input

```ts
interface WxStreamTextInput {
  data: { // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void; // Incremental text callback
  onEvent?: (prop: { data: string }) => void; // Raw event callback
  onFinish?: (text: string) => void; // Completion callback
}
```

### streamText() Return

```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // Incremental text stream
  eventStream: AsyncIterable<{ // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string; // "[DONE]" when complete
  }>;
}
```

### generateText() Return

```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
## Best Practices

- **Check the base library version** - ensure 3.7.1+ for AI support
- **Use callbacks for UI updates** - `onText` is great for real-time display
- **Check for [DONE]** - when using `eventStream`, check `event.data === "[DONE]"` to stop
- **Handle errors gracefully** - wrap AI calls in try/catch (see the sketch below)
- **Remember the `data` wrapper** - `streamText` params must be wrapped in `data: {...}`
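For the error-handling point above, a minimal try/catch sketch (the toast copy is illustrative):

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

try {
  const res = await model.generateText({
    model: "hunyuan-2.0-instruct-20251111",
    messages: [{ role: "user", content: "你好" }],
  });
  console.log(res.choices[0].message.content);
} catch (err) {
  // Network failures, quota limits, and invalid params all surface here
  console.error("AI call failed:", err);
  wx.showToast({ title: "请求失败，请重试", icon: "none" });
}
```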