Why We Chose Prism PHP for Our 9-Agent Orchestration

TL;DR: Building an AI-native forge is not just about sending prompts. It is about predictable orchestration. We integrated Prism — the Laravel AI SDK — into the heart of Craftly, and it has changed how we think about agentic PHP systems.

The Problem with Raw API Calls

When we started prototyping Craftly's 9-agent pipeline, the obvious first step was to hit the Anthropic API directly. A Guzzle client, a few environment variables, a prompt string — and you have something working in an afternoon. The problem shows up later, when "working" is not enough.

Raw API calls tie your code to a single provider's request shape. Every agent in the pipeline needs its own boilerplate for retries, timeouts, and error handling. Structured output becomes a manual JSON-parsing exercise. Tool calling requires hand-rolling the function schema every time. And if you ever want to swap Claude for GPT-4 on a specific agent — or run a local Ollama model in development — you are rewriting rather than configuring.
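To make that boilerplate cost concrete, here is the kind of retry wrapper every raw-API agent ends up carrying. This is a plain-PHP sketch; the helper name and backoff values are illustrative, not from Craftly's codebase:

```php
<?php
// Illustrative sketch: the retry/backoff boilerplate each agent needs
// when calling a provider API directly. Names and values are hypothetical.
function callWithRetries(callable $request, int $maxAttempts = 3, int $baseDelayMs = 250): mixed
{
    $attempt = 0;
    while (true) {
        try {
            return $request();
        } catch (RuntimeException $e) {
            $attempt++;
            if ($attempt >= $maxAttempts) {
                throw $e; // out of retries: surface the provider error
            }
            // Exponential backoff between attempts.
            usleep($baseDelayMs * (2 ** ($attempt - 1)) * 1000);
        }
    }
}
```

Multiply this by timeouts, JSON decoding, and per-provider error shapes, and the per-agent cost adds up quickly.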

That is the problem Prism solves cleanly.

Provider Agility Without Abstraction Overhead

Prism wraps Anthropic, OpenAI, Mistral, Gemini, Ollama, and several other providers behind a single fluent interface. In practice, this means our MasterArchitect agent runs on claude-sonnet-4-6 for its strong reasoning, while the Forge Workers run on claude-haiku-4-5 to keep generation costs low — and both share identical orchestration code. The model is just a parameter.

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-haiku-4-5-20251001')
    ->withSystemPrompt($workerSystemPrompt)
    ->withPrompt($architectureContext)
    ->asText();

Swapping the Forge Workers to a different provider in a future iteration requires changing one line. No refactoring, no new HTTP client, no updated response parser.
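One way to make "the model is just a parameter" concrete is a per-environment selector. This is a hypothetical helper, not Craftly's code, and the Ollama model name is illustrative; the returned pair would feed the `->using()` call shown above:

```php
<?php
// Hypothetical per-environment model selection for the Forge Workers.
// Provider and model strings here are illustrative examples.
function forgeWorkerModel(string $environment): array
{
    return match ($environment) {
        'local'  => ['provider' => 'ollama',    'model' => 'qwen2.5-coder'],
        default  => ['provider' => 'anthropic', 'model' => 'claude-haiku-4-5-20251001'],
    };
}
```

The orchestration code never changes; only the pair returned here does.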

Structured Output as a First-Class Concept

A multi-agent pipeline lives and dies on the quality of data passed between agents. If the MasterArchitect produces a schema that the Forge Workers cannot reliably parse, you get silent failures — generated files that look plausible but are subtly wrong.

Prism's structured output support lets us define PHP value objects and have the model populate them directly, with Prism handling the schema generation and response coercion. The MasterArchitect does not return a blob of JSON for us to decode defensively — it returns a typed ArchitectureSchema object, validated on arrival.

This is where we eliminated most of the ~5% edge-case failure rate that plagued the earlier raw-API version. Validation failures now surface at the schema boundary instead of being discovered three agents later, when a migration references a column that was never defined.
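The receiving side of that contract can be sketched as a plain value object. The field names below are hypothetical stand-ins for Craftly's real blueprint; Prism's job is to generate the JSON schema and coerce the model's response into objects shaped like this:

```php
<?php
// Sketch of a typed inter-agent contract. Field names are illustrative;
// the real ArchitectureSchema carries the full application blueprint.
final class ArchitectureSchema
{
    public function __construct(
        public readonly string $appName,
        public readonly array $models,      // e.g. ['Post' => ['title' => 'string']]
        public readonly array $migrations,  // ordered migration descriptors
    ) {}

    // Validate at the schema boundary so failures surface immediately,
    // not three agents later.
    public static function fromArray(array $data): self
    {
        foreach (['appName', 'models', 'migrations'] as $key) {
            if (!array_key_exists($key, $data)) {
                throw new InvalidArgumentException("Missing required key: {$key}");
            }
        }
        return new self($data['appName'], $data['models'], $data['migrations']);
    }
}
```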

Tool Calling Without the Boilerplate

The Validator agent — the final stage of the pipeline — needs to call out to external tools: running php artisan route:list, checking that generated model relationships resolve, verifying that migrations are syntactically valid. Prism's tool calling interface lets us register these as first-class tools with typed inputs and outputs.

The agent decides which tools to invoke, Prism handles the function-call loop, and our tool implementations stay clean PHP methods rather than tangled callback arrays. The result is a Validator that reads like a Laravel service class, not a prompt engineering experiment.
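For contrast, here is roughly the loop Prism runs for you. This is a framework-free sketch, not Prism's actual implementation: the "model" is a fake callable that either requests a tool or returns text, and the tools are plain closures:

```php
<?php
// Framework-free sketch of the function-call loop a tool-calling SDK
// manages: invoke the model, dispatch any requested tool, feed the
// result back, and stop when the model returns plain text.
function runToolLoop(callable $model, array $tools, int $maxSteps = 5): string
{
    $messages = [];
    for ($step = 0; $step < $maxSteps; $step++) {
        $reply = $model($messages);
        if ($reply['type'] === 'text') {
            return $reply['content']; // model is done
        }
        // Model requested a tool: dispatch it and record the result.
        $result = ($tools[$reply['tool']])(...$reply['arguments']);
        $messages[] = ['tool' => $reply['tool'], 'result' => $result];
    }
    throw new RuntimeException('Tool loop exceeded max steps');
}
```

Prism owns this loop plus the schema plumbing around it, which is what keeps the Validator's tool implementations as plain methods.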

PHP as an Agentic Platform

The broader point is this: the PHP ecosystem does not need to import its AI patterns from the Python world. The agent-as-queue-worker model maps perfectly onto Laravel's job infrastructure. Prism gives us the model abstraction layer. Structured output gives us the inter-agent contract. Tool calling gives us the ability to reach outside the model when ground truth matters.

The result is a pipeline where each agent is a Laravel job: testable in isolation, retry-able on failure, observable via Horizon, and deployable on the same infrastructure as the rest of the application. No Python microservices, no separate orchestration layer, no operational complexity that does not already exist in a standard Laravel stack.
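Stripped of Laravel specifics, the pattern looks like the sketch below. This is plain PHP for illustration only; in the real pipeline each agent is a queued Laravel job and the queue worker handles retries:

```php
<?php
// Minimal sketch of the agent-as-queue-worker pattern. The stand-in
// runner shows the shape: each stage transforms a shared context and
// is retried independently on failure.
interface AgentJob
{
    public function handle(array $context): array; // returns updated pipeline context
}

function runPipeline(array $jobs, array $context, int $tries = 2): array
{
    foreach ($jobs as $job) {
        for ($attempt = 1; ; $attempt++) {
            try {
                $context = $job->handle($context);
                break; // stage succeeded, move on
            } catch (RuntimeException $e) {
                if ($attempt >= $tries) {
                    throw $e; // exhausted retries: fail the pipeline
                }
            }
        }
    }
    return $context;
}
```

Because each stage is just a class with a handle method, it can be unit-tested in isolation with a handcrafted context array.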

Where This Leaves Us

Craftly's Laravel generator currently runs in under 2 minutes wall-clock and costs roughly $0.19 in model inference per full application. The cost breakdown reflects the two-tier model strategy: the MasterArchitect (Sonnet 4.6, heavy reasoning) accounts for the majority of that figure; the Forge Workers (Haiku 4.5, execution at scale) keep the per-file cost negligible.

Prism was not the only option. We evaluated calling the Anthropic SDK directly, wrapping it ourselves, and using a Python orchestrator over HTTP. The reason we landed on Prism is simple: it is the right abstraction at the right level for a Laravel-native agentic system. It gives us provider flexibility, structured contracts between agents, and tool calling — without adding a dependency that fights the framework.

If you are building AI-powered features in Laravel and reaching for raw HTTP clients, take a look at Prism first. The ergonomics are worth it.

Learn more about Craftly → · Get in touch →