
Compute-Ready Stream

get_compute_ready_stream
Plans: sample, sp500, full

Returns a presigned R2 URL for direct bulk Parquet access. Use for loading full tables into DuckDB, Spark, or Pandas without going through the Bulk Data API. Presigned URLs expire in 1 hour. Supports all 11 tables — 8 core + 3 derived analytic tables.

Example Call

# Using the Valuein MCP server from Python (via MCP SDK)
# Or call directly from Claude / Cursor after setup

# `client` is an initialized MCP client session
result = await client.call_tool(
    "get_compute_ready_stream",
    arguments={"table": "fact"},
)
print(result)

Direct tool call: get_compute_ready_stream(table="fact")

Try it now

No token required

Paste this in your terminal — the sample tier returns real S&P500 data without authentication.

# No auth required — sample tier covers S&P500 with a 5-year window.
# Add an Authorization: Bearer header for full universe and history.
$ curl -X POST https://mcp.valuein.biz/mcp \
    -H "Content-Type: application/json" \
    -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
          "name": "get_compute_ready_stream",
          "arguments": {
            "table": "AAPL"
          }
        }
      }'
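The same JSON-RPC envelope can be built programmatically. A minimal Python sketch (standard library only; the endpoint URL and request shape are taken from the curl example above, and the network call itself is left commented out):

```python
import json

MCP_ENDPOINT = "https://mcp.valuein.biz/mcp"

def build_call_payload(table: str, request_id: int = 1) -> dict:
    """JSON-RPC 2.0 envelope for a tools/call request, mirroring the curl example."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_compute_ready_stream",
            "arguments": {"table": table},
        },
    }

payload = build_call_payload("fact")
print(json.dumps(payload, indent=2))

# To actually send it (add an Authorization: Bearer header for sp500/full tiers):
#
#   import urllib.request
#   req = urllib.request.Request(
#       MCP_ENDPOINT,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```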

Inputs

table (string, required): Table name. Core: entity, security, filing, fact, valuation, taxonomy_guide, index_membership, references. Derived: ratio, factor_scores, earnings_signals.
format (string, optional): Output format: parquet (default) or wide_parquet for pre-pivoted wide tables.
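The accepted values can be captured in a small client-side validator before issuing the call. This is a sketch for catching typos early, not part of the official client (the server performs its own validation):

```python
# Documented table and format names, per the Inputs section above.
CORE_TABLES = {"entity", "security", "filing", "fact", "valuation",
               "taxonomy_guide", "index_membership", "references"}
DERIVED_TABLES = {"ratio", "factor_scores", "earnings_signals"}
FORMATS = {"parquet", "wide_parquet"}

def build_arguments(table: str, fmt: str = "parquet") -> dict:
    """Build the arguments dict for get_compute_ready_stream, rejecting
    names not listed in the docs (e.g. a ticker passed as a table name)."""
    if table not in CORE_TABLES | DERIVED_TABLES:
        raise ValueError(f"unknown table: {table!r}")
    if fmt not in FORMATS:
        raise ValueError(f"unknown format: {fmt!r}")
    args = {"table": table}
    if fmt != "parquet":  # parquet is the default; omit it when unset
        args["format"] = fmt
    return args
```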

Output Fields

url, expires_at, table, plan, estimated_size_mb

Example Response

{
  "url": "https://r2.valuein.biz/signed/fact.parquet?token=...",
  "expires_at": "2024-03-15",
  "table": "fact",
  "plan": "sp500",
  "estimated_size_mb": 342
}

Returns a presigned URL for fact.parquet — load directly with pd.read_parquet(url) or duckdb.read_parquet(url).

Notes

URL expires in 60 minutes. The fact table is large (several GB on full plan) — use DuckDB's lazy reading with filters rather than loading the full table into memory.
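The filtered-read pattern can be sketched as below. The column names cik and fiscal_year are illustrative assumptions, not a confirmed fact-table schema; check the actual columns before filtering:

```python
def filtered_fact_query(url: str, cik: str, fiscal_year: int) -> str:
    """Build a DuckDB query over a presigned Parquet URL.

    DuckDB reads Parquet lazily over HTTP: column and row-group pruning
    mean only data matching the filter is fetched, so the full multi-GB
    fact table never has to fit in memory.
    """
    return (
        f"SELECT * FROM read_parquet('{url}') "
        f"WHERE cik = '{cik}' AND fiscal_year = {fiscal_year}"
    )

# Usage (requires the duckdb package and a live, unexpired presigned URL):
#   import duckdb
#   df = duckdb.sql(filtered_fact_query(url, "0000320193", 2023)).df()
```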