LLM Wiki / 2026-04-12

LLM Wiki

A fast bilingual reference for how large language model work is structured in practice.

This page collects the core layers people usually need when working with LLM systems: model behavior, prompt structure, context setup, tool use, and repeatable agent workflow patterns. It is meant to be skimmed first and revisited as a working reference.

Focus: Model / Prompt / Context

Good LLM outcomes rarely come from one clever sentence. They come from an aligned system that gives the model the right task, enough grounding, the right tools, and a clear operating frame.

Model Basics / Prompt Design / Context Framing

Workflow: Tools / Agents / Review

The wiki frames practical execution too: let the model reason, let tools reach the environment, and let review loops catch weak assumptions before they become production mistakes.

01 / Snapshot

A fast overview of what this wiki is organizing.

Start here if you want the big picture first. The goal is to give teams a compact shared language for thinking about LLM systems, from instruction design to validation and execution.

Core Frame

Model / Prompt / Context

These layers work together. The model brings capability, the prompt defines the task, and context supplies the background the model cannot safely infer by itself.
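To make the separation concrete, here is a minimal sketch of keeping the prompt (task definition) and context (grounding) as distinct parts of one request. It assumes a chat-style API that accepts role-tagged messages; the function name and example strings are illustrative, not any specific vendor's API.

```python
# Sketch: keep the task prompt and the grounding context separable,
# so each layer can be reviewed and changed independently.

def build_messages(task_prompt: str, context_docs: list[str], user_input: str) -> list[dict]:
    """Assemble a chat request with the task and its grounding kept distinct."""
    context_block = "\n\n".join(
        f"[context {i + 1}]\n{doc}" for i, doc in enumerate(context_docs)
    )
    return [
        # prompt layer: defines the task
        {"role": "system", "content": task_prompt},
        # context layer + the actual request
        {"role": "user", "content": f"{context_block}\n\n{user_input}"},
    ]

messages = build_messages(
    task_prompt="Answer using only the supplied context.",
    context_docs=["Release 2.1 ships on May 3.", "Release 2.0 is frozen."],
    user_input="When does the next release ship?",
)
```

Because the context is assembled from a list, swapping in better grounding never requires touching the task prompt, and vice versa.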

Execution

Tool-Augmented Work

In real workflows, the model should not stay isolated in chat. Tools let it search, read files, edit code, run tests, and verify assumptions against the environment.
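The loop behind this is simple: the model emits a structured tool call, and a dispatcher routes it to a real function. A minimal sketch, assuming the model returns a dict like `{"tool": ..., "args": ...}`; the tool names and registry here are hypothetical, not a specific framework's API.

```python
# Sketch: route a model-issued tool call to a matching function.
from pathlib import Path

def read_file(path: str) -> str:
    """Let the model read a file from the environment."""
    return Path(path).read_text()

def search(query: str, corpus: list[str]) -> list[str]:
    """Let the model search a small in-memory corpus."""
    return [line for line in corpus if query in line]

TOOLS = {"read_file": read_file, "search": search}

def dispatch(call: dict):
    """Execute one structured tool call produced by the model."""
    fn = TOOLS.get(call["tool"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['tool']}")
    return fn(**call["args"])

result = dispatch(
    {"tool": "search", "args": {"query": "test", "corpus": ["run tests", "edit code"]}}
)
```

In practice each tool result is fed back to the model, which decides the next call or stops; the dispatcher is the point where assumptions get verified against the real environment.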

Tooling

Harness / Checks / Handoff

A harness turns raw model output into a dependable working system by adding permissions, checkpoints, review criteria, and repeatable handoff patterns.
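Those pieces can be sketched as a small gate around every action: an allowlist for permissions and a log for checkpoints. This is an illustrative toy, assuming string-named actions; real harnesses are richer, but the shape is the same.

```python
# Sketch: a harness that gates actions by permission and records
# a checkpoint trail for later review.
import time

class Harness:
    def __init__(self, allowed_actions: set[str]):
        self.allowed = allowed_actions
        self.checkpoints: list[tuple[float, str]] = []  # reviewable trail

    def run(self, action: str, fn, *args):
        """Execute fn only if the action is permitted; log a checkpoint."""
        if action not in self.allowed:
            raise PermissionError(f"action not allowed: {action}")
        result = fn(*args)
        self.checkpoints.append((time.time(), action))
        return result

h = Harness(allowed_actions={"read", "test"})
out = h.run("read", lambda: "file contents")
```

The checkpoint list is what makes handoff repeatable: a reviewer can replay what the agent was allowed to do and what it actually did.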

02 / Reading Path

A suggested order for absorbing the material quickly.

This wiki reads best from model limits to prompt structure to tool-enabled workflow. That sequence makes it easier to turn abstract ideas into reliable project behavior.

Suggested Flow

Reading Path

  1. Start with what the model can and cannot do reliably.
  2. Then separate the task prompt from the supporting context.
  3. Finally wrap the work in tools, checks, and a repeatable harness.
Use Cases

Where It Fits

  • Building onboarding material for teammates starting LLM work.
  • Reviewing why a prompt or workflow is underperforming.
  • Designing repeatable agent flows for engineering or content work.
  • Growing a lightweight wiki into a deeper AI knowledge base over time.
03 / Document

Read the PDF directly inside the page.

The original PDF is embedded below. Use this page for a quick conceptual pass, then continue reading here or open the PDF separately for the full document.