# Sync API

Reproducibility is critical for AI. For code, tracking changes is easy with GitHub or GitLab. For data, it's much harder: most of the time we end up hand-writing complicated data-tracking code, wrestling with an external tool, or paying for expensive, coarse-grained duplicate snapshots. With most other vector databases, if we load the wrong data (or make any similar mistake), we have to blow away the index, correct the mistake, and rebuild it from scratch. Rolling back to an earlier state is difficult, and any such corrective action destroys historical data and evidence that may be useful down the line to debug and diagnose issues.

To our knowledge, LanceDB is the first and only vector database that natively supports full reproducibility and rollbacks. Taking advantage of the Lance columnar data format, LanceDB supports:

- Automatic versioning
- Instant rollback
- Appends, updates, deletions
- Schema evolution

This makes auditing, tracking, and reproducibility a breeze! Let's see how this all works.

## Pickle Rick!

Let's first prepare the data. We will be using a CSV file with a bunch of quotes from Rick and Morty:

```python
!wget http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
!head rick_and_morty_quotes.csv
```

```
--2024-12-17 11:54:43--  http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
Resolving vectordb-recipes.s3.us-west-2.amazonaws.com... 52.92.138.34, 3.5.82.160, 52.218.236.161, ...
Connecting to vectordb-recipes.s3.us-west-2.amazonaws.com|52.92.138.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8236 (8.0K) [text/csv]
Saving to: ‘rick_and_morty_quotes.csv.1’

2024-12-17 11:54:43 (77.8 MB/s) - ‘rick_and_morty_quotes.csv.1’ saved [8236/8236]

id,author,quote
1,Rick," Morty, you got to come on. You got to come with me."
2,Morty," Rick, what’s going on?"
3,Rick," I got a surprise for you, Morty."
4,Morty," It’s the middle of the night. What are you talking about?"
5,Rick," I got a surprise for you."
6,Morty," Ow! Ow! You’re tugging me too hard."
7,Rick," I got a surprise for you, Morty."
8,Rick," What do you think of this flying vehicle, Morty? I built it out of stuff I found in the garage."
9,Morty," Yeah, Rick, it’s great. Is this the surprise?"
```

Let's load this into a pandas DataFrame. It has three columns: a quote id, the first name of the author, and the quote string:

```python
import pandas as pd

df = pd.read_csv("rick_and_morty_quotes.csv")
df.head()
```

```
   id author                                              quote
0   1   Rick   Morty, you got to come on. You got to come wi...
1   2  Morty                             Rick, what’s going on?
2   3   Rick                   I got a surprise for you, Morty.
3   4  Morty   It’s the middle of the night. What are you ta...
4   5   Rick                          I got a surprise for you.
```

We'll start with a local LanceDB connection:

```python
!pip install lancedb -q
```

```python
import lancedb

db = lancedb.connect("~/.lancedb")
```

Creating a LanceDB table from a pandas DataFrame is straightforward using `create_table`:

```python
db.drop_table("rick_and_morty", ignore_missing=True)
table = db.create_table("rick_and_morty", df)
table.head().to_pandas()
```

```
   id author                                              quote
0   1   Rick   Morty, you got to come on. You got to come wi...
1   2  Morty                             Rick, what’s going on?
2   3   Rick                   I got a surprise for you, Morty.
3   4  Morty   It’s the middle of the night. What are you ta...
4   5   Rick                          I got a surprise for you.
```

## Updates

Now, since Rick is the smartest man in the multiverse, he deserves to have his quotes attributed to his full name: Richard Daniel Sanchez. This can be done via `LanceTable.update`, which takes two arguments:

- `where`: a filter string (SQL syntax) that determines the rows to update
- `values`: a dict whose keys are the column names to update and whose values are the new values

```python
table.update(where="author='Rick'", values={"author": "Richard Daniel Sanchez"})
table.to_pandas()
```

```
    id                  author                                              quote
0    2                   Morty                             Rick, what’s going on?
1    4                   Morty   It’s the middle of the night. What are you ta...
2    6                   Morty                Ow! Ow! You’re tugging me too hard.
3    9                   Morty      Yeah, Rick, it’s great. Is this the surprise?
4   11                   Morty                                    What?! A bomb?!
..  ...                    ...                                                ...
94  80  Richard Daniel Sanchez  There you are, Morty. Listen to me. I got an ...
95  82  Richard Daniel Sanchez  It’s pretty obvious, Morty. I froze him. Now ...
96  84  Richard Daniel Sanchez  Do you have any concept of how much higher th...
97  86  Richard Daniel Sanchez  I’ll do it later, Morty. He’ll be fine. Let’s...
98  97  Richard Daniel Sanchez  There she is. All right. Come on, Morty. Let’...

99 rows × 3 columns
```

## Schema evolution

OK, so this is a vector database, so we need actual vectors. We'll use Sentence Transformers here to avoid having to deal with API keys. Let's create a basic model using "all-MiniLM-L6-v2" and embed the quotes:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
vectors = model.encode(
    df.quote.values.tolist(),
    convert_to_numpy=True,
    normalize_embeddings=True,
).tolist()
```

We can then convert the vectors into a pyarrow Table and merge it into the LanceDB table. For the merge to work successfully, we need an overlapping column. Here the natural choice is the `id` column:

```python
from lance.vector import vec_to_table
import numpy as np
import pyarrow as pa

embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table)) + 1))
embeddings.to_pandas().head()
```

```
                                              vector  id
0  [-0.10369808, -0.038807657, -0.07471153, -0.05...   1
1  [-0.11813704, -0.0533092, 0.025554786, -0.0242...   2
2  [-0.09807682, -0.035231438, -0.04206024, -0.06...   3
3  [0.032292824, 0.038136397, 0.013615396, 0.0335...   4
4  [-0.050369408, -0.0043397923, 0.013419108, -0....   5
```

And now we'll use the `LanceTable.merge` function to add the vector column to the LanceTable:

```python
table.merge(embeddings, left_on="id")
table.head().to_pandas()
```

```
   id author                                              quote                                             vector
0   2  Morty                             Rick, what’s going on?  [-0.11813704, -0.0533092, 0.025554786, -0.0242...
1   4  Morty   It’s the middle of the night. What are you ta...  [0.032292824, 0.038136397, 0.013615396, 0.0335...
2   6  Morty                Ow! Ow! You’re tugging me too hard.  [-0.035019904, -0.070963725, 0.003859435, -0.0...
3   9  Morty      Yeah, Rick, it’s great. Is this the surprise?  [-0.12578955, -0.019364933, 0.01606114, -0.082...
4  11  Morty                                    What?! A bomb?!  [0.0018287548, 0.07033146, -0.023754105, 0.047...
```

If we look at the schema, we see that all-MiniLM-L6-v2 produces 384-dimensional vectors:

```python
table.schema
```

```
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[384]
  child 0, item: float
```

## Rollback

Suppose we used the table and found that the all-MiniLM-L6-v2 model doesn't produce ideal results, and we want to try a larger model instead. How do we use the new embeddings without losing the change history?

First, major operations are automatically versioned in LanceDB:

- Version 1 is the table creation, with the initial insertion of data.
- Versions 2 and 3 represent the update (a deletion plus an append).
- Version 4 is the merge that added the vector column.

```python
table.list_versions()
```

```
[{'version': 1, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932), 'metadata': {}},
 {'version': 2, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525), 'metadata': {}},
 {'version': 3, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378), 'metadata': {}},
 {'version': 4, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085), 'metadata': {}}]
```

We can restore version 3, from before we added the vector column:

```python
table.restore(3)
table.head().to_pandas()
```

```
   id author                                              quote
0   2  Morty                             Rick, what’s going on?
1   4  Morty   It’s the middle of the night. What are you ta...
2   6  Morty                Ow! Ow! You’re tugging me too hard.
3   9  Morty      Yeah, Rick, it’s great. Is this the surprise?
4  11  Morty                                    What?! A bomb?!
```

Notice that we now have one *more* version, not one fewer. Restoring an old version doesn't delete the version history; it creates a new version whose schema and data are equivalent to the restored old version.
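This restore-as-append behavior can be sketched with a toy model. The snippet below is illustrative pure Python only, not LanceDB's actual implementation; it just mirrors the semantics described above:

```python
# Toy model of restore-as-append versioning. Illustrative only --
# NOT LanceDB's implementation, just the semantics described above.
history = [
    {"version": 1, "data": "initial rows"},
    {"version": 2, "data": "rows after update (deletion)"},
    {"version": 3, "data": "rows after update (append)"},
    {"version": 4, "data": "rows plus vector column"},
]

def restore(history, version):
    """Copy an old state forward as a brand-new version; delete nothing."""
    old = next(v for v in history if v["version"] == version)
    history.append({"version": history[-1]["version"] + 1, "data": old["data"]})

restore(history, 3)
print(len(history))          # 5 -- one more version, not fewer
print(history[-1]["data"])   # same contents as version 3
```

Because every restore is itself a new version, the audit trail stays intact no matter how many times we roll back.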
In this way, we can keep track of all of the changes and always roll back to a previous state.

```python
table.list_versions()
```

```
[{'version': 1, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932), 'metadata': {}},
 {'version': 2, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525), 'metadata': {}},
 {'version': 3, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378), 'metadata': {}},
 {'version': 4, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085), 'metadata': {}},
 {'version': 5, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 27, 153807), 'metadata': {}}]
```

## Switching models

Now we'll switch to the all-mpnet-base-v2 model and add its vectors to the restored dataset. Note that this step can take a couple of minutes:

```python
model = SentenceTransformer("all-mpnet-base-v2", device="cpu")
vectors = model.encode(
    df.quote.values.tolist(),
    convert_to_numpy=True,
    normalize_embeddings=True,
).tolist()
embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table)) + 1))
table.merge(embeddings, left_on="id")
```

The schema shows that all-mpnet-base-v2 produces 768-dimensional vectors:

```python
table.schema
```

```
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[768]
  child 0, item: float
```

## Deletion

What if the whole show was just Rick-isms? Let's delete any quote not said by Rick:

```python
table.delete("author != 'Richard Daniel Sanchez'")
```

We can see that the number of rows has been reduced to 28:

```python
len(table)
```

```
28
```

OK, we've had our fun; let's get back to the full quote set:

```python
table.restore(6)
len(table)
```

```
99
```

## History

We now have 8 versions in the data. We can review the operation that corresponds to each version below:

```python
table.version
```

```
8
```

Versions:

1. Create and append
2. Update (deletion)
3. Update (append)
4. Merge (vector column)
5. Restore (version 3)
6. Merge (new vector column)
7. Deletion
8. Restore (version 6)

## Summary

We never had to explicitly manage the versioning, and we never had to create expensive, slow snapshots. LanceDB automatically tracks the full history of operations and supports fast rollbacks. In production this is critical for debugging issues and minimizing downtime by rolling back to a previously successful state in seconds.