
The Next Generation of
Query Processing

GenDB is a Generative Query Engine that uses LLM agents to generate instance-optimized query execution code, tailored to your specific data, workloads, and hardware.

2.4x faster than DuckDB on TPC-H
6.8x faster than DuckDB on SEC-EDGAR
280x faster than PostgreSQL
What is GenDB?

Synthesized, Not Engineered

Five specialized LLM agents collaborate through a structured pipeline to generate optimized storage, indexes, and standalone native executables — all tailored to the specific data, workload, and hardware.

GenDB System Overview
Agent 1

Workload Analyzer

Profiles hardware, samples data, extracts workload characteristics

Agent 2

Storage Designer

Designs layouts with encoding, compression, indexes, and zone maps

Agent 3

Query Planner

Generates resource-aware execution plans adapted to data and hardware

Agent 4

Code Generator

Implements plans as optimized native code with SIMD and parallelism

Agent 5

Query Optimizer

Iteratively refines code using runtime profiling feedback
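The five-agent pipeline above can be sketched as a sequence of stages passing shared state. This is an illustrative sketch only — the agent functions, `Context` class, and artifact keys are hypothetical, not GenDB's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared state handed from one agent to the next."""
    artifacts: dict = field(default_factory=dict)

def workload_analyzer(ctx):   # Agent 1: profile hardware, sample data
    ctx.artifacts["profile"] = {"cores": 16, "row_count": 1_000_000}

def storage_designer(ctx):    # Agent 2: layout, encoding, zone maps
    ctx.artifacts["layout"] = "columnar + dict-encoding + zone-maps"

def query_planner(ctx):       # Agent 3: resource-aware execution plan
    ctx.artifacts["plan"] = ["scan", "filter", "hash-join", "agg"]

def code_generator(ctx):      # Agent 4: emit native code for the plan
    ctx.artifacts["code"] = f"// SIMD/parallel impl of {ctx.artifacts['plan']}"

def query_optimizer(ctx):     # Agent 5: refine using profiling feedback
    ctx.artifacts["code"] += "  // refined after profiling"

PIPELINE = [workload_analyzer, storage_designer, query_planner,
            code_generator, query_optimizer]

def run():
    ctx = Context()
    for agent in PIPELINE:    # each agent reads and extends the context
        agent(ctx)
    return ctx

print(run().artifacts["code"])
```

The key design point mirrored here is that each agent consumes the artifacts produced upstream — the code generator sees the plan, the optimizer sees the generated code.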

Why GenDB?

A Third Option

Today, every new use case demands either a painful extension or an entirely new system:

Option 1 — Extend an existing system

PostgreSQL → PostGIS, TimescaleDB, pgvector, Citus, AGE …
Each extension fights the host system’s architectural constraints.

Option 2 — Build a new system

DuckDB, Umbra, ClickHouse, Milvus, Pinecone, InfluxDB, Neo4j …
Each requires years of engineering and huge monetary costs.

Option 3 — Generate

Use LLMs to generate per-query execution code. No extension wrestling, no multi-year engineering. New techniques become reachable through prompt updates.

Performance

Instance-optimized code exploits exact data distributions, join selectivities, group cardinalities, and hardware characteristics. No general-purpose engine can match this.
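As a toy illustration of what "exploiting exact data distributions" can mean (this is not GenDB output): a general-purpose engine must aggregate through a hash table, while instance-optimized code can bake in a known fact — here, that the group key is dictionary-encoded with only 4 distinct values — and use a dense, cache-resident array instead:

```python
def generic_group_sum(keys, vals):
    """What a general-purpose engine must do: hash-aggregate."""
    acc = {}
    for k, v in zip(keys, vals):
        acc[k] = acc.get(k, 0) + v
    return acc

def specialized_group_sum(codes, vals, num_groups=4):
    """Instance-optimized: key is known to be a dense code in [0, 4),
    so a fixed-size array replaces hashing entirely."""
    acc = [0] * num_groups
    for c, v in zip(codes, vals):
        acc[c] += v
    return acc

codes = [0, 1, 2, 3, 0, 1]
vals  = [10, 20, 30, 40, 50, 60]
print(specialized_group_sum(codes, vals))   # [60, 80, 30, 40]
```

A generic engine cannot assume the dense-code invariant holds, so it pays for hashing on every row; code generated for one specific dataset can.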

Extensibility

Integrating new techniques requires prompting, not re-engineering. Semantic queries, GPU-native code — all reachable through prompt updates.

$


Economics

In half of production clusters, roughly 80% of queries are repeats. Generation cost is therefore amortized over many executions, making GenDB cost-effective for recurring analytical workloads.
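The amortization argument reduces to simple arithmetic. The dollar figures below are hypothetical, chosen only to show the break-even calculation:

```python
def break_even_runs(generation_cost, cost_per_run_generic, cost_per_run_generated):
    """Number of executions after which one-time generation pays for itself."""
    saving_per_run = cost_per_run_generic - cost_per_run_generated
    if saving_per_run <= 0:
        return float("inf")   # generated code must actually be cheaper per run
    return generation_cost / saving_per_run

# e.g. $0.50 one-time LLM cost; $0.010 per generic run vs $0.002 per generated run
runs = break_even_runs(0.50, 0.010, 0.002)
print(runs)   # 62.5 -> pays off after ~63 executions
```

For a query that repeats daily, such a break-even point is reached within weeks; for one-off queries, generation would not pay off — which is why the repeat rate matters.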

Leaderboard

Performance Rankings

Ranked by total execution time summed over all benchmark queries. GenDB variants use different LLM backbone models. All systems run on identical hardware with full parallelism enabled.

TPC-H (SF10, ~10GB) · SEC-EDGAR (3yr, ~5GB)
[Leaderboard tables — columns: # · System · Total Time · vs. Best · GenDB Relative]
Model Comparison

Generation Cost & Speed

Different LLM backbone models offer different trade-offs between generated code quality, generation time, and cost. Ranked by average query execution time.

Roadmap

What’s Next

GenDB is under active development. Every step follows three principles:

Higher Quality
More Robust
Lower Cost
Completed

OLAP Workloads

Multi-agent pipeline for analytical queries. Evaluated on TPC-H and SEC-EDGAR, outperforming DuckDB, Umbra, ClickHouse, MonetDB, and PostgreSQL.

Self-Evolving Agent Memory

Agents learn from past runs, accumulate optimization experience, and improve generation quality over time — without retraining the underlying LLMs.

Planned

GPU-Native Code Generation

Generate CUDA and GPU-accelerated code targeting libcudf for cost-efficient GPU analytics, not just CPU.

Planned

Semantic Query Processing

Generate code for multimodal data — images, audio, text — with AI-powered operators, moving beyond SQL’s relational model.

Planned

… and more

Reusable operators across queries, query template generation, hybrid execution with traditional DBMS, and further cost reduction as LLMs become faster and cheaper.

Team

GenDB Team

Built at the Cornell Database Group.