Memory for OpenClaw: From Zero to LanceDB Pro
Benchmarking three OpenClaw memory plugins on the LOCOMO dataset
OpenClaw and similar personal autonomous agents need a local-first long-term memory layer. LanceDB fits that role with embedded deployment, filesystem-native storage, and multimodal retrieval.
How we redesigned blob storage in Lance to make multimodal data a first-class citizen, with four storage semantics (Inline, Packed, Dedicated, External) that automatically adapt to your workload.
Lance file format 2.2 introduces Blob V2, nested schema evolution, native Map type support, and additional compression and performance improvements for AI/ML data workloads.
Native Lance support on Hugging Face Hub, Git-style branching and shallow clone for AI data, and Arrow-native geospatial with R-Tree indexing, plus steady OSS and community momentum.
How Lance's Arrow-native architecture enables first-class geospatial support through extension types, GeoDataFusion integration, and R-Tree indexing.
A deep dive into how table formats handle version management for ML/AI experimentation, and how Lance unifies branching, tagging, and shallow clone on top of its multi-base architecture.
Kicking off 2026 with Lance-native SQL retrieval via DuckDB, Uber-scale multi-bucket storage, 1.5M IOPS benchmarks, and continued OSS momentum across the Lance ecosystem.
Announcing native read support for the Lance format on Hugging Face Hub. You can now distribute large multimodal datasets as a single, searchable artifact, with blobs, embeddings, and indexes all in one place!
Learn how LanceDB benchmarks storage and how we achieved one million disk reads per second.