
Proxy Mode

Use proxy mode when your meta-aggregation layer runs as a service and clients fetch quotes from that service, instead of calling provider APIs directly from the client.

When to Use It

  • You want provider API keys and provider-selection logic on the server.
  • You want a single policy point for rate limits, allowlists, and request validation.
  • You run multi-region services and want region-aware provider configuration.

Client Configuration

import { proxy } from "@spandex/core";
import { SpandexProvider } from "@spandex/react";
 
<SpandexProvider
  config={{
    proxy: proxy({
      pathOrUrl: "/api",
      delegatedActions: ["prepareSimulatedQuotes"],
    }),
  }}
>
  {children}
</SpandexProvider>;

When proxy is configured, you must declare at least one delegated action. Each action name matches both the client function name and the path suffix on your service.
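As a rough sketch of that mapping (the swap parameter names below are made up for illustration, not the library's actual schema), the delegated action name becomes the path suffix the client requests under pathOrUrl:

```typescript
// Hypothetical illustration: with pathOrUrl "/api" and the delegated action
// "prepareSimulatedQuotes", the client issues requests like
// GET /api/prepareSimulatedQuotes?<serialized swap params>.
const pathOrUrl = "/api";
const action = "prepareSimulatedQuotes";

// Parameter names here are invented for the example.
const params = new URLSearchParams({
  sellToken: "0xSell",
  buyToken: "0xBuy",
  sellAmount: "1000000",
});

const requestUrl = `${pathOrUrl}/${action}?${params}`;
// e.g. "/api/prepareSimulatedQuotes?sellToken=0xSell&buyToken=0xBuy&sellAmount=1000000"
```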

Delegation Model

Proxy mode uses two streamed server primitives:

  • prepareQuotes for raw quotes
  • prepareSimulatedQuotes for quotes that have already been simulated on the server

Client APIs like getQuotes and getQuote stay local. They compose over streamed simulated quotes, which keeps custom selection logic available on the client while still moving provider access and simulation infrastructure to the server.
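A minimal sketch of why local selection stays useful (this is not the library's getQuote implementation; the quote shape and strategy are invented): any custom strategy can fold over the streamed simulated quotes on the client, here "highest buyAmount wins".

```typescript
// Hypothetical quote shape for illustration only.
interface SimQuote {
  provider: string;
  buyAmount: bigint;
}

// Stand-in for the simulated quotes streamed from the proxy service.
async function* streamedQuotes(): AsyncGenerator<SimQuote> {
  yield { provider: "dexA", buyAmount: 990n };
  yield { provider: "dexB", buyAmount: 1005n };
}

// Client-side selection: consume the stream and keep the best quote seen.
async function selectBest(quotes: AsyncIterable<SimQuote>): Promise<SimQuote | undefined> {
  let best: SimQuote | undefined;
  for await (const quote of quotes) {
    if (!best || quote.buyAmount > best.buyAmount) best = quote;
  }
  return best;
}

const best = await selectBest(streamedQuotes());
```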

How It Works

  1. The client calls quote APIs/hooks (getQuotes, getQuote, getRawQuotes, useQuotes, etc.).
  2. AggregatorProxy serializes the swap params into query params and calls your endpoint.
  3. On the service, prepareQuotes and prepareSimulatedQuotes each return a streaming response (newStream(...)), so quotes arrive progressively.
  4. getQuotes collects the streamed simulated quotes locally, and getQuote selects locally.
  5. For fast-return strategies such as fastest, the client cancels the stream after selection. That abrupt close propagates to the remote fetch and can stop unnecessary work.
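The cancellation behavior in the last step can be sketched with plain async iterators (this is not the library's internals, just the underlying mechanism): breaking out of a for await loop runs the generator's cleanup, which is how a client-side cancel can propagate and stop further work.

```typescript
// Tracks whether the producer's cleanup ran after the consumer stopped early.
let cancelled = false;

// Stand-in for a stream of quotes from the proxy service.
async function* quoteStream() {
  try {
    yield { provider: "a", price: 100 };
    yield { provider: "b", price: 101 };
  } finally {
    // Runs when the consumer breaks out of the loop early,
    // analogous to the stream close propagating to the remote fetch.
    cancelled = true;
  }
}

// A "fastest"-style strategy: take the first quote, then stop the stream.
async function fastest() {
  for await (const quote of quoteStream()) {
    return quote;
  }
}

const first = await fastest();
```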


Hono Service Example

This pattern is implemented in examples/hono/src/index.ts.

import { Hono } from "hono";
import { stream } from "hono/streaming";
import { zValidator } from "@hono/zod-validator";
import {
  newStream,
  prepareQuotes,
  prepareSimulatedQuotes,
  quoteStreamErrorHandler,
  simulatedQuoteStreamErrorHandler,
  type Quote,
  type SimulatedQuote,
  type SwapParams,
} from "@spandex/core";
 
// querySchema (a Zod schema matching SwapParams) and config (your provider
// configuration) are defined elsewhere in the example file.
const app = new Hono();
 
app.get("/prepareQuotes", zValidator("query", querySchema), async (c) => {
  const swap = c.req.valid("query") satisfies SwapParams;
  return stream(c, async (streamWriter) => {
    const prepared = await prepareQuotes<Quote>({
      swap,
      config,
      mapFn: (quote: Quote) => Promise.resolve(quote),
    });
    await streamWriter.pipe(newStream<Quote>(prepared, quoteStreamErrorHandler));
  });
});
 
app.get("/prepareSimulatedQuotes", zValidator("query", querySchema), async (c) => {
  const swap = c.req.valid("query") satisfies SwapParams;
  return stream(c, async (streamWriter) => {
    const prepared = await prepareSimulatedQuotes({
      swap,
      config,
    });
    await streamWriter.pipe(
      newStream<SimulatedQuote>(prepared, simulatedQuoteStreamErrorHandler),
    );
  });
});

React Streaming and Async Iterators

useQuotes defaults to streamed results (streamResults: true) and builds on TanStack Query's streamedQuery. Internally, quote and simulation promises are exposed as an AsyncIterable so each result can be yielded as soon as it is ready.

This is what enables progressive quote rendering instead of waiting for all providers to finish.
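The promises-to-AsyncIterable idea can be sketched as follows (a simplified illustration, not the library's or TanStack's actual implementation): race the in-flight promises and yield each result as soon as it settles, instead of waiting for Promise.all.

```typescript
// Expose a set of in-flight promises as an AsyncIterable that yields
// results in completion order.
async function* streamAsReady<T>(promises: Promise<T>[]): AsyncIterable<T> {
  // Tag each promise with its index so it can be removed once it wins a race.
  const pending = new Map(
    promises.map((p, i) => [i, p.then((value) => ({ i, value }))] as const),
  );
  while (pending.size > 0) {
    const { i, value } = await Promise.race(pending.values());
    pending.delete(i);
    yield value;
  }
}

// Two stand-in "providers" with different latencies.
const delayed = (value: string, ms: number) =>
  new Promise<string>((resolve) => setTimeout(() => resolve(value), ms));

const order: string[] = [];
for await (const v of streamAsReady([delayed("slow", 50), delayed("fast", 10)])) {
  order.push(v);
}
// The faster result arrives first, enabling progressive rendering.
```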