
OpenSearch vs LanceDB for Vector Search: Query Cost and Infrastructure
Choosing a vector database usually comes down to a tradeoff between a full search service and an in-process library. This post presents benchmarks comparing OpenSearch and LanceDB on COCO 2017 images embedded with SigLIP, measuring ingestion throughput, query cost, storage layout, and overall infrastructure cost.
Engineering
Case Study

Volcano Engine LAS's Lance-Based PB-Scale Autonomous Driving Data Lake Solution
How ByteDance's Volcano Engine LAS (Lake for AI Service) uses Lance as its core storage format to rapidly build a next-generation AI data lake that efficiently stores, manages, and processes multimodal data (text, images, and audio/video).
Case Study
Autonomous Vehicles

Lance JSON Support: Why You Might Not Really Need Variant
Lance's JSONB storage, scalar indexing, data evolution, and full-text search already deliver what most users want from Variant — with explicit control, schema consistency, and no vendor lock-in.
Engineering

📄 Lance Blob V2, 🤗 Upload Lance Datasets to HF Hub, 🦞 LanceDB for OpenClaw's Memory
Lance Blob V2 introduces adaptive storage semantics, Lance datasets can now be uploaded to the Hugging Face Hub with ease, and OpenClaw adopts LanceDB as a default memory layer for agents, plus community and enterprise updates.
Community

Lance Format v2.2 Benchmarks: Half the Storage, None of the Slowdown
Benchmarks showing how Lance format v2.2 cuts storage by more than 50%, beats Parquet on compression, and delivers up to 68x faster blob reads, all while preserving the scan and random-access patterns that multimodal training depends on.
Engineering
