LLM Agent Playbooks

Agent Skills Hub — Equipping LLMs for the Real World

Agent Skills Hub · Empowering LLMs to Solve Real-World Problems

Discover reusable tools, schema patterns, and evaluation checklists that help LLM agents call external services safely and reliably.

Standard tool invocation flow

LLM Prompt & Context → Agent Skill Handler → External API / Service

Requests flow from the model into a typed tool handler, reach a trusted API, and return structured results to the agent loop.
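The flow above can be sketched as a minimal loop. This is an illustrative sketch only: the `weatherSkill` handler and the `ToolCall`/`ToolResult` shapes are assumptions, not part of any provider SDK, and the external API call is stubbed.

```typescript
// Minimal sketch of the invocation flow; `weatherSkill` is hypothetical.
type ToolCall = { name: string; arguments: Record<string, unknown> };
type ToolResult = { name: string; content: string };

// 1. LLM Prompt & Context: the model emits a structured tool call.
const modelToolCall: ToolCall = {
  name: "get_weather",
  arguments: { city: "Berlin" },
};

// 2. Agent Skill Handler: a typed tool validates input, then calls out.
async function weatherSkill(args: Record<string, unknown>): Promise<ToolResult> {
  const city = String(args.city ?? "");
  if (!city) throw new Error("city is required");
  // 3. External API / Service: stubbed here for the sketch.
  const tempC = 18;
  // Structured answer returned to the agent loop.
  return { name: "get_weather", content: JSON.stringify({ city, tempC }) };
}

weatherSkill(modelToolCall.arguments).then((r) => console.log(r.content));
```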

Agent Skill Playbooks

Reproduce the Claude quickstart scenarios with ready-to-launch Next.js handlers and Anthropic-compatible schemas, rendered server-side for instant indexing.

Next.js Agent Skill Development

Follow this baseline to keep every skill type-safe, testable, and compatible with both OpenAI Function Calling and Claude's tool use API.

  1. Model the contract

     Define strict input and output interfaces so the LLM knows exactly what data it must supply and what it can expect back.

  2. Describe the function schema

     Ship a JSON schema with enums, defaults, and clear descriptions so tool selection stays deterministic.

  3. Execute inside an API route

     Use Next.js Route Handlers to validate payloads, call downstream services, and return normalized JSON.

  4. Log, observe, and iterate

     Emit metrics, add retries, and capture traces so you can improve completion quality over time.
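Steps 1–3 above can be sketched in a single route file. This is a sketch, not a definitive implementation: the path `app/api/skills/weather/route.ts`, the `WeatherInput`/`WeatherOutput` names, and the stubbed downstream call are all assumptions; step 4 (metrics, retries, traces) is omitted for brevity.

```typescript
// app/api/skills/weather/route.ts — illustrative sketch; names are assumed.

// Step 1: model the contract with strict input/output types.
interface WeatherInput {
  city: string;
  unit?: "celsius" | "fahrenheit"; // enum keeps tool selection deterministic
}
interface WeatherOutput {
  city: string;
  temperature: number;
  unit: "celsius" | "fahrenheit";
}

// Step 2: the JSON schema the model sees, mirroring the contract.
export const weatherSchema = {
  name: "get_weather",
  description: "Look up the current temperature for a city.",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. Berlin" },
      unit: { type: "string", enum: ["celsius", "fahrenheit"], default: "celsius" },
    },
    required: ["city"],
  },
};

// Step 3: a Next.js Route Handler that validates and returns normalized JSON.
export async function POST(req: Request): Promise<Response> {
  const body = (await req.json()) as Partial<WeatherInput>;
  if (typeof body.city !== "string" || body.city.length === 0) {
    return new Response(JSON.stringify({ error: "city is required" }), {
      status: 400,
      headers: { "content-type": "application/json" },
    });
  }
  const unit = body.unit ?? "celsius";
  // Downstream service call stubbed; replace with a real weather API.
  const temperature = unit === "celsius" ? 18 : 64;
  const result: WeatherOutput = { city: body.city, unit, temperature };
  return new Response(JSON.stringify(result), {
    headers: { "content-type": "application/json" },
  });
}
```

Because Route Handlers use the standard `Request`/`Response` web APIs, this function can be unit-tested directly without starting a Next.js server.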

Social Proof & Updates

Want to be featured here? Share your launch and we’ll surface the best builds for the community.

Drop the link to your Twitter thread or landing page. We’ll review each submission before highlighting it on the hub.

Agent Skills FAQ

What are agent skills?

They are modular tools that let LLM agents call external systems — inspired by Anthropic's Claude Skills and focused on solving real tasks beyond pure text generation.

How does Next.js help with agent skills?

Next.js route handlers provide a secure execution surface for tools, while the front end showcases documentation, demos, and contribution workflows.

How is this different from OpenAI Function Calling?

The pattern is similar: you expose structured functions that models can invoke. Providers differ in schema fields and invocation wrappers, so we document compatible choices for both.
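As a sketch of that difference, the same JSON Schema can be wrapped for either provider; the main divergence is where the schema lives (`function.parameters` for OpenAI vs `input_schema` for Anthropic). The `get_weather` tool here is an assumed example.

```typescript
// One JSON Schema shared by both providers.
const inputSchema = {
  type: "object",
  properties: { city: { type: "string", description: "City to look up" } },
  required: ["city"],
};

// OpenAI Function Calling: schema sits under `function.parameters`.
const openAiTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Current weather for a city",
    parameters: inputSchema,
  },
};

// Anthropic (Claude) tool use: schema sits under `input_schema`.
const anthropicTool = {
  name: "get_weather",
  description: "Current weather for a city",
  input_schema: inputSchema,
};
```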