Traditional vector search relies on single, fixed-size embeddings (dense vectors) for documents and queries. While powerful, this approach can lose nuanced, token-level details.

Multi-vector search, used in models like ColBERT or ColPali, replaces a single document or image vector with a set of per-token vectors. This enables a "late interaction" mechanism, where fine-grained similarity is calculated term by term to boost retrieval accuracy (a minimal sketch follows the list below).

- Higher Accuracy: By matching at a granular, token level, FastPlaid captures subtle relevance that single-vector models simply miss.
- PLAID stands for Per-Token Late Interaction Dense Search.
- Blazing Performance: Engineered in Rust and optimized for GPUs.
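For intuition, here is a minimal sketch of the MaxSim late-interaction score used by ColBERT-style models (illustrative only; this is not FastPlaid's internal implementation):

```python
import torch

def late_interaction_score(query: torch.Tensor, document: torch.Tensor) -> torch.Tensor:
    """MaxSim: match every query token against its best document token, then sum."""
    # query: (num_query_tokens, dim), document: (num_doc_tokens, dim)
    similarities = query @ document.T  # pairwise token-level similarities
    return similarities.max(dim=1).values.sum()  # best match per query token, summed

query = torch.randn(32, 128)      # 32 query tokens, 128-dim each
document = torch.randn(300, 128)  # 300 document tokens
print(late_interaction_score(query, document))
```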
```bash
pip install fast-plaid
```

FastPlaid is available in multiple versions to support different PyTorch versions:
| FastPlaid Version | PyTorch Version | Installation Command |
|---|---|---|
| 1.4.1.290 | 2.9.0 | `pip install fast-plaid==1.4.1.290` |
| 1.4.1.280 | 2.8.0 | `pip install fast-plaid==1.4.1.280` |
| 1.4.1.271 | 2.7.1 | `pip install fast-plaid==1.4.1.271` |
| 1.4.1.270 | 2.7.0 | `pip install fast-plaid==1.4.1.270` |
You can add FastPlaid to your project dependencies with version ranges to ensure compatibility:
For `requirements.txt`:

```
fast-plaid>=1.4.1.270,<=1.4.1.290
```

For `pyproject.toml`:

```toml
[project]
dependencies = [
    "fast-plaid>=1.4.1.270,<=1.4.1.290"
]
```

For `setup.py`:

```python
install_requires=[
    "fast-plaid>=1.4.1.270,<=1.4.1.290"
]
```

Choose the appropriate version range based on your PyTorch requirements.
Building from Source:

```bash
# Install the Rust toolchain (required to compile FastPlaid)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Build and install FastPlaid from the repository
pip install git+https://github.com/lightonai/fast-plaid.git
```
Get started with creating an index and performing a search in just a few lines of Python.
```python
import torch
from fast_plaid import search

# Use "cpu", or "cuda" for GPU. Leave device unset for auto-detection,
# including multi-GPU. On CPU, specifying the device explicitly speeds up
# initialization.
fast_plaid = search.FastPlaid(index="index", device="cpu")

embedding_dim = 128

# Index 100 documents, each with 300 tokens; each token is a 128-dim vector.
fast_plaid.create(
    documents_embeddings=[torch.randn(300, embedding_dim) for _ in range(100)]
)

# Search with 2 queries, each with 50 tokens; each token is a 128-dim vector.
scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, embedding_dim),
    top_k=10,
)
print(scores)
```

The output is a list of lists, where each inner list contains `(document_index, similarity_score)` tuples for the top `top_k` results of each query:
```python
[
    [
        (20, 1334.55),
        (91, 1299.57),
        (59, 1285.78),
        (10, 1273.53),
        (62, 1267.96),
        (44, 1265.55),
        (15, 1264.42),
        (34, 1261.19),
        (19, 1261.05),
        (86, 1260.94),
    ],
    [
        (58, 1313.85),
        (75, 1313.82),
        (79, 1305.32),
        (61, 1304.45),
        (64, 1303.67),
        (68, 1302.98),
        (66, 1301.23),
        (65, 1299.78),
    ],
]
```

To add new documents to an existing index, load it and call `.update()`:

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # Load an existing index

embedding_dim = 128

fast_plaid.update(
    documents_embeddings=[torch.randn(300, embedding_dim) for _ in range(100)]
)

scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, embedding_dim),
    top_k=10,
)
print(scores)
```

The `.update()` method efficiently adds new documents to an existing index while automatically maintaining centroid quality through a buffered expansion mechanism:
- Buffered Updates: New documents are first accumulated in a buffer. When the buffer reaches the `buffer_size` threshold (default: 100 documents), the system triggers a centroid expansion check.
- Automatic Centroid Expansion: During expansion, embeddings that are far from all existing centroids (outliers) are identified using a distance threshold computed during index creation (see the sketch after this list). These outliers are then clustered with K-means to create new centroids, which are appended to the existing set.
- Efficient Small Updates: For small batches of documents (below `buffer_size`), the update is performed immediately without centroid expansion, ensuring fast incremental updates.
This approach balances efficiency with accuracy: small updates are fast, while larger batches automatically adapt the index structure to accommodate new data distributions.
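For intuition, here is a minimal sketch of the outlier-detection step, assuming plain Euclidean distances and a hypothetical `threshold` value (FastPlaid computes its threshold during index creation; this is not its internal code):

```python
import torch

def find_outliers(embeddings: torch.Tensor, centroids: torch.Tensor, threshold: float) -> torch.Tensor:
    """Return embeddings whose nearest centroid is farther than `threshold`."""
    # Distance from every embedding to its nearest existing centroid.
    nearest = torch.cdist(embeddings, centroids).min(dim=1).values
    return embeddings[nearest > threshold]  # candidates for new centroids

embeddings = torch.randn(1_000, 128)
centroids = torch.randn(32, 128)
outliers = find_outliers(embeddings, centroids, threshold=14.0)
print(outliers.shape)  # these would be clustered with K-means into new centroids
```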
You can restrict your search to a specific subset of documents by using the `subset` parameter of the `.search()` method. This is useful for implementing metadata filtering or searching within a pre-defined collection.

The `subset` parameter accepts a list of IDs. These IDs correspond directly to the order of insertion, starting from 0. For example, if you index 100 documents with `.create()`, they will have IDs 0 through 99. If you then add 50 more documents with `.update()`, they will be assigned the subsequent IDs 100 through 149.

You can provide a single list of IDs to apply the same filter to all queries, or a list of lists to specify a different filter for each query.
```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # Load an existing index

# Apply a single filter to all queries:
# search for the top 5 results only within documents [2, 5, 10, 15, 18].
scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, 128),  # 2 queries
    top_k=5,
    subset=[2, 5, 10, 15, 18],
)
print(scores)

# Apply a different filter for each query:
# query 1 searches within documents [0, 1, 2, 3, 4],
# query 2 within documents [10, 11, 12, 13, 14].
scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, 128),  # 2 queries
    top_k=5,
    subset=[
        [0, 1, 2, 3, 4],
        [10, 11, 12, 13, 14],
    ],
)
print(scores)
```

Providing a `subset` filter can significantly speed up the search, especially when the subset is much smaller than the total number of indexed documents. To increase recall when applying drastic filtering, consider increasing the `n_ivf_probe` parameter of the `.search()` method (default: 8). It controls the number of clusters searched within the index for each query; only clusters that contain documents from the provided subset are considered during the search.
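For example, with a filter this narrow you might probe more clusters than the default (the value here is illustrative):

```python
scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, 128),
    top_k=5,
    subset=[2, 5, 10, 15, 18],
    n_ivf_probe=32,  # probe more clusters to recover recall under a drastic filter
)
```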
Index loading (`FastPlaid(...)`):

| Parameter | Default | Memory | Speed | Description |
|---|---|---|---|---|
| `low_memory` | `True` | `True` = less VRAM usage | `True` = slower | GPU only (no effect when device is `"cpu"`). Loads index tensors on CPU and moves them to the device only when needed. Setting `False` accelerates search at the cost of higher VRAM usage. |

Index creation (`.create()`):

| Parameter | Default | Speed | Accuracy | Description |
|---|---|---|---|---|
| `n_samples_kmeans` | `None` | lower = faster | lower = less precise | Number of samples used to compute centroids |
| `nbits` | 4 | lower = faster | lower = less precise | Product quantization bits |
| `kmeans_niters` | 4 | higher = slower indexing | higher = better clusters | K-means iterations |

Search (`.search()`):

| Parameter | Default | Speed | Accuracy | Description |
|---|---|---|---|---|
| `n_ivf_probe` | 8 | higher = slower | higher = better recall | Cluster probes per query |
| `n_full_scores` | 4096 | higher = slower | higher = better ranking | Candidates for full scoring |

Index updates (`.update()`):

| Parameter | Default | Description |
|---|---|---|
| `buffer_size` | 100 | Number of documents to accumulate before triggering centroid expansion |
| `start_from_scratch` | 999 | Rebuild the index from scratch if fewer documents exist |
| `kmeans_niters` | 4 | K-means iterations for centroid expansion |
| `max_points_per_centroid` | 256 | Maximum points per centroid during expansion |
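To illustrate how these knobs compose, here is a sketch that trades speed for accuracy (the specific values are illustrative, not recommendations):

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index", low_memory=False)  # keep tensors on the GPU

fast_plaid.create(
    documents_embeddings=[torch.randn(300, 128) for _ in range(100)],
    kmeans_niters=8,  # more K-means iterations -> better clusters, slower indexing
)

scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, 128),
    n_ivf_probe=16,      # probe more clusters -> better recall
    n_full_scores=8192,  # re-rank more candidates -> better ranking
)
```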
FastPlaid significantly outperforms the original PLAID engine across various datasets, delivering comparable accuracy with faster indexing and query speeds.
| Dataset | Size | Library | NDCG@10 | Indexing Time (s) | Queries per second (QPS) |
|---|---|---|---|---|---|
| arguana | 8,674 | PLAID | 0.46 | 4.30 | 56.73 |
| | | FastPlaid | 0.46 | 4.72 | 155.25 (+174%) |
| fiqa | 57,638 | PLAID | 0.41 | 17.65 | 48.13 |
| | | FastPlaid | 0.41 | 12.62 | 146.62 (+205%) |
| nfcorpus | 3,633 | PLAID | 0.37 | 2.30 | 78.31 |
| | | FastPlaid | 0.37 | 2.10 | 243.42 (+211%) |
| quora | 522,931 | PLAID | 0.88 | 40.01 | 43.06 |
| | | FastPlaid | 0.87 | 11.23 | 281.51 (+554%) |
| scidocs | 25,657 | PLAID | 0.19 | 13.32 | 57.17 |
| | | FastPlaid | 0.18 | 10.86 | 157.47 (+175%) |
| scifact | 5,183 | PLAID | 0.74 | 3.43 | 67.66 |
| | | FastPlaid | 0.75 | 3.16 | 190.08 (+181%) |
| trec-covid | 171,332 | PLAID | 0.84 | 69.46 | 32.09 |
| | | FastPlaid | 0.83 | 45.19 | 54.11 (+69%) |
| webis-touche2020 | 382,545 | PLAID | 0.25 | 128.11 | 31.94 |
| | | FastPlaid | 0.24 | 74.50 | 70.15 (+120%) |

All benchmarks were performed on an H100 GPU. Note that PLAID relies on just-in-time (JIT) compilation, which means its very first execution can exhibit longer runtimes; to keep the performance analysis representative, these initial JIT-affected runs are excluded from the reported results. In contrast, FastPlaid does not employ JIT compilation, so its performance on the first run is directly indicative of its typical execution speed.
FastPlaid builds upon the groundbreaking work of the original PLAID engine by Santhanam et al.
You can cite FastPlaid in your work as follows:
```bibtex
@misc{fastplaid2025,
  author = {Sourty, Raphaël},
  title = {FastPlaid: A High-Performance Engine for Multi-Vector Search},
  year = {2025},
  url = {https://github.com/lightonai/fast-plaid}
}
```

And for the original PLAID research:

```bibtex
@inproceedings{santhanam2022plaid,
  title={{PLAID}: an efficient engine for late interaction retrieval},
  author={Santhanam, Keshav and Khattab, Omar and Potts, Christopher and Zaharia, Matei},
  booktitle={Proceedings of the 31st ACM International Conference on Information \& Knowledge Management},
  pages={1747--1756},
  year={2022}
}
```
The FastPlaid class is the core component for building and querying multi-vector search indexes. It's designed for high performance, especially when leveraging GPUs.
To create an instance of FastPlaid, you'll provide the directory where your index will be stored and specify the device(s) for computation.
```python
class FastPlaid:
    def __init__(
        self,
        index: str,
        device: str | list[str] | None = None,
        low_memory: bool = True,
    ) -> None: ...
```

`index: str`
The file path to the directory where your index will be saved or loaded from.
`device: str | list[str] | None = None`

Specifies the device(s) to use for computation.

- If `None` (default) and CUDA is available, it defaults to `"cuda"`; otherwise it defaults to `"cpu"`.
- Can be a single device string (e.g., `"cuda:0"` or `"cpu"`) or a list of device strings (e.g., `["cuda:0", "cuda:1"]`).
- Explicitly specifying the device accelerates the initialization of the FastPlaid index, especially on CPU.
- If multiple GPUs are specified and available, multiprocessing is automatically set up for parallel execution.
Remember to include your code within an `if __name__ == "__main__":` block for proper multiprocessing behavior.
`low_memory: bool = True`

If `True`, the index is loaded in a memory-efficient manner, keeping tensors on CPU and moving them to the target device only when needed. This reduces VRAM usage at the cost of some performance. Has no effect when the device is `"cpu"`.
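For example, a multi-GPU setup might look like this (assuming two CUDA devices and an index that already exists at `"index"`; the guard is required for the multiprocessing mentioned above):

```python
import torch
from fast_plaid import search

if __name__ == "__main__":
    # Two GPUs: FastPlaid sets up multiprocessing for parallel execution.
    fast_plaid = search.FastPlaid(index="index", device=["cuda:0", "cuda:1"])
    scores = fast_plaid.search(queries_embeddings=torch.randn(2, 50, 128), top_k=10)
```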
The create method builds the multi-vector index from your document embeddings. It uses K-means clustering to organize your data for efficient retrieval.
```python
def create(
    self,
    documents_embeddings: list[torch.Tensor] | torch.Tensor,
    kmeans_niters: int = 4,
    max_points_per_centroid: int = 256,
    nbits: int = 4,
    n_samples_kmeans: int | None = None,
    batch_size: int = 50_000,
    seed: int = 42,
    use_triton_kmeans: bool | None = None,
    metadata: list[dict[str, Any]] | None = None,
) -> "FastPlaid": ...
```

`documents_embeddings: list[torch.Tensor] | torch.Tensor`
A list where each element is a PyTorch tensor representing the multi-vector embedding for a single document.
Each document's embedding should have a shape of `(num_tokens, embedding_dimension)`. Can also be a single tensor of shape `(num_documents, num_tokens, embedding_dimension)`.
`kmeans_niters: int = 4` (optional)

The number of iterations for the K-means algorithm used during index creation. This influences the quality of the initial centroid assignments.

`max_points_per_centroid: int = 256` (optional)

The maximum number of points (token embeddings) that can be assigned to a single centroid during K-means. This helps in balancing the clusters.

`nbits: int = 4` (optional)

The number of bits to use for product quantization. This parameter controls the compression of your embeddings, impacting both index size and search speed. Lower values mean more compression and potentially faster searches but can reduce accuracy.

`n_samples_kmeans: int | None = None` (optional)

The number of samples to use for K-means clustering. If `None`, it defaults to a value based on the number of documents. This parameter can be adjusted to balance speed, memory usage, and clustering quality. For a large dataset, you may want to set it to a smaller value to speed up indexing and save memory.

`batch_size: int = 50_000` (optional)

Batch size for processing embeddings during index creation.

`seed: int = 42` (optional)

Seed for the random number generator used in index creation. Setting this ensures reproducible results across multiple runs.

`use_triton_kmeans: bool | None = None` (optional)

Whether to use the Triton-based K-means implementation. If `None`, it is enabled whenever the device is not `"cpu"`. The Triton-based implementation can provide better performance on GPUs. Set it to `False` to ensure perfectly reproducible results across runs.

`metadata: list[dict[str, Any]] | None = None` (optional)

An optional list of metadata dictionaries, one per indexed document. Each dictionary can contain arbitrary key-value pairs to associate with the document. If provided, the length of this list must match the number of documents being indexed. The metadata is stored in a SQLite database within the index directory for filtering during searches.
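Putting these parameters together, a call might look like this (the values and metadata keys are illustrative):

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")
fast_plaid.create(
    documents_embeddings=[torch.randn(300, 128) for _ in range(1_000)],
    nbits=4,          # default product-quantization compression
    kmeans_niters=4,  # default K-means iterations
    seed=42,          # reproducible index creation
    metadata=[{"doc_id": i} for i in range(1_000)],  # one dict per document
)
```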
The `update` method provides an efficient way to add new documents to an existing index while automatically maintaining centroid quality. It uses a buffered expansion mechanism: documents are accumulated until reaching `buffer_size`, at which point embeddings far from existing centroids are identified and used to create new centroids that are appended to the index structure. This ensures the index adapts to new data distributions over time.
```python
def update(
    self,
    documents_embeddings: list[torch.Tensor] | torch.Tensor,
    metadata: list[dict[str, Any]] | None = None,
    batch_size: int = 50_000,
    kmeans_niters: int = 4,
    max_points_per_centroid: int = 256,
    n_samples_kmeans: int | None = None,
    seed: int = 42,
    start_from_scratch: int = 999,
    buffer_size: int = 100,
    use_triton_kmeans: bool | None = False,
) -> "FastPlaid": ...
```

`documents_embeddings: list[torch.Tensor] | torch.Tensor`
A list where each element is a PyTorch tensor representing the multi-vector embedding for a single document.
Each document's embedding should have a shape of `(num_tokens, embedding_dimension)`.
This method will add these new embeddings to the existing index.
`metadata: list[dict[str, Any]] | None = None`

An optional list of metadata dictionaries, one per new document. Each dictionary can contain arbitrary key-value pairs to associate with the document. If provided, the length of this list must match the number of new documents being added. The metadata is stored in a SQLite database within the index directory for filtering during searches.

`batch_size: int = 50_000` (optional)

Batch size for processing embeddings during the update.

`kmeans_niters: int = 4` (optional)

The number of iterations for the K-means algorithm when creating new centroids during expansion.

`max_points_per_centroid: int = 256` (optional)

The maximum number of points per centroid when creating new centroids.

`n_samples_kmeans: int | None = None` (optional)

The number of samples to use for K-means clustering during centroid expansion. If `None`, it defaults to a value based on the number of documents.

`seed: int = 42` (optional)

Seed for the random number generator used during centroid expansion.

`start_from_scratch: int = 999` (optional)

If the existing index has fewer documents than this threshold, the index is completely rebuilt from scratch instead of being updated incrementally.

`buffer_size: int = 100` (optional)

Number of documents to accumulate before triggering centroid expansion. When the buffer reaches this size, outlier embeddings (far from existing centroids) are identified and clustered to create new centroids, which are appended to the index.

`use_triton_kmeans: bool | None = False` (optional)

Whether to use the Triton-based K-means implementation during centroid expansion.
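For instance, a small incremental update (below `buffer_size`, so no expansion is triggered; this assumes the index was created with metadata, and the metadata keys are illustrative):

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # load the existing index
fast_plaid.update(
    documents_embeddings=[torch.randn(300, 128) for _ in range(10)],  # small batch
    metadata=[{"source": "batch-2"} for _ in range(10)],
)
```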
The search method lets you query the created index with your query embeddings and retrieve the most relevant documents.
```python
def search(
    self,
    queries_embeddings: torch.Tensor | list[torch.Tensor],
    top_k: int = 10,
    batch_size: int = 25_000,
    n_full_scores: int = 4096,
    n_ivf_probe: int = 8,
    show_progress: bool = True,
    subset: list[list[int]] | list[int] | None = None,
) -> list[list[tuple[int, float]]]: ...
```

`queries_embeddings: torch.Tensor | list[torch.Tensor]`
A PyTorch tensor representing the multi-vector embeddings of your queries.
Its shape should be `(num_queries, num_tokens_per_query, embedding_dimension)`.
Can also be a list of tensors, each representing a separate query. All tensors in the list must have the same embedding dimension.
`top_k: int = 10` (optional)

The number of top-scoring documents to retrieve for each query.

`batch_size: int = 25_000` (optional)

The internal batch size used for processing queries. A larger batch size may improve throughput on powerful GPUs but consumes more memory.

`n_full_scores: int = 4096` (optional)

The number of candidate documents for which full (re-ranked) scores are computed. This is a crucial parameter for accuracy: higher values yield more accurate results but increase computation.

`n_ivf_probe: int = 8` (optional)

The number of inverted-file-list "probes" performed during the search. This controls how many clusters are searched within the index for each query. Higher values improve recall but increase search time.

`show_progress: bool = True` (optional)

If `True`, a progress bar is displayed during the search operation.

`subset: list[list[int]] | list[int] | None = None` (optional)

An optional list of integers, or list of lists of integers. If provided, the search for each query is restricted to the document IDs in the corresponding inner list.

- If a single list is provided, the same filter is applied to all queries.
- If a list of lists is provided, each inner list is the filter for the corresponding query.
- Document IDs correspond to the order of insertion, starting from 0.
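Since queries may also be passed as a list of tensors, variable-length queries work like this (the token counts are arbitrary):

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")
scores = fast_plaid.search(
    queries_embeddings=[
        torch.randn(32, 128),  # first query: 32 tokens
        torch.randn(48, 128),  # second query: 48 tokens
    ],
    top_k=10,
    show_progress=False,
)
```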
The `delete` method permanently removes embeddings from the index based on their insertion-order IDs. If a metadata database exists, the corresponding entries are automatically removed as well. To update an existing embedding, delete it first and then add the new version with the `.update()` method. Warning: the delete method re-indexes the remaining documents to maintain a sequential order, so if you delete document `k`, every document with ID > `k` has its ID decreased by 1.
```python
def delete(
    self,
    subset: list[int],
) -> "FastPlaid": ...
```

`subset: list[int]`
A list of embedding IDs to delete from the index. IDs are based on the original order of insertion, starting from 0. After deletion, the remaining documents are re-indexed to maintain a sequential order.
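A small sketch of the re-indexing behavior described above (assuming an index of 100 documents):

```python
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # index holds documents 0..99

# Delete documents 10 and 20.
fast_plaid.delete(subset=[10, 20])

# 98 documents remain, re-indexed to IDs 0..97:
# the former document 11 is now 10, the former 21 is now 19, and so on.
```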
Any contributions to FastPlaid are welcome! If you have ideas for improvements, bug fixes, or new features, please open an issue or submit a pull request. We are particularly interested in:
- Additional algorithms for multi-vector search.
- New search output formats for better integration with existing systems.
- Performance optimizations for CPU and GPU operations.
FastPlaid includes a lightweight, optional, built-in metadata filtering engine powered by SQLite. When you provide the `metadata` parameter during `.create()` or `.update()`, FastPlaid automatically stores this information in a searchable database within your index directory.

You can then use the `fast_plaid.filtering.where()` function to query this database using standard SQL conditions. This function returns a list of embedding IDs that match your criteria, which you can pass directly to the `subset` parameter of the `.search()` method to pre-filter your search.
```python
from datetime import date

import torch
from fast_plaid import filtering, search

# 1. Initialize the FastPlaid index
fast_plaid = search.FastPlaid(index="metadata_index")
embedding_dim = 128

# 2. Create initial documents with metadata
initial_embeddings = [torch.randn(10, embedding_dim) for _ in range(3)]
initial_metadata = [
    {"name": "Alice", "category": "A", "join_date": date(2023, 5, 17)},
    {"name": "Bob", "category": "B", "join_date": date(2021, 6, 21)},
    {"name": "Alex", "category": "A", "join_date": date(2023, 8, 1)},
]
fast_plaid.create(documents_embeddings=initial_embeddings, metadata=initial_metadata)

# 3. Update the index with new documents and metadata
new_embeddings = [torch.randn(10, embedding_dim) for _ in range(2)]
new_metadata = [
    {"name": "Charlie", "category": "B", "join_date": date(2020, 3, 15)},
    {
        "name": "Amanda",
        "category": "A",
        "join_date": date(2024, 1, 10),
        "status": "active",
    },
]
fast_plaid.update(documents_embeddings=new_embeddings, metadata=new_metadata)

# 4. Use filtering.where to get the rows of the FastPlaid index
#    which match the SQL condition
subset = filtering.where(
    index="metadata_index",
    condition="name LIKE ? AND join_date > ?",
    parameters=("A%", "2022-12-31"),
)

# 5. Perform a search restricted to the filtered subset
query_embedding = torch.randn(1, 5, embedding_dim)
scores = fast_plaid.search(queries_embeddings=query_embedding, top_k=3, subset=subset)
print("Search results within the subset:")
print(scores)

# 6. Access the metadata of the retrieved documents
for match in scores:
    print("Metadata of matched documents:")
    print(filtering.get(index="metadata_index", subset=[doc_id for doc_id, _ in match]))
```

You can also rely on the existing `subset` parameter of the `.search()` method to filter candidates by insertion order, or on an external filtering system, since providing the `metadata` parameter is optional.
