
Comparison with Facebook FAISS #

FAISS (Facebook AI Similarity Search) is one of the most widely used libraries for vector similarity search and clustering of dense vectors. Developed by Facebook AI Research, it implements multiple ANN techniques such as IVF, PQ, and HNSW. Below we compare PatANN and FAISS across various datasets and metrics.
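For context, a typical FAISS workflow builds an IVF+PQ index, trains it on the base vectors, and then searches it. The Python sketch below is only illustrative: the data, dimensions, and parameters (nlist, m, nbits, nprobe) are placeholders and not the configuration used in the benchmark; the benchmarked FAISS-IVFPQFS variant additionally uses FAISS's fast-scan PQ implementation.

```python
import numpy as np
import faiss  # pip install faiss-cpu (or faiss-gpu)

d = 128                      # SIFT1M descriptors are 128-dimensional
xb = np.random.random((100_000, d)).astype("float32")  # stand-in for the base set
xq = np.random.random((1_000, d)).astype("float32")    # stand-in for the queries

nlist, m, nbits = 1024, 16, 8          # illustrative parameters, not the benchmark's
quantizer = faiss.IndexFlatL2(d)       # coarse quantizer defining the IVF partitions
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

index.train(xb)                        # learn IVF centroids and PQ codebooks
index.add(xb)
index.nprobe = 32                      # number of inverted lists scanned per query
D, I = index.search(xq, 10)            # distances and ids of the 10 nearest neighbors
```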

Query Time vs. Recall@10 Comparison #

[Figure: Query Time vs. Recall@10 graph for PatANN and FAISS]

Detailed Comparison (SIFT1M, k=10) #

Performance Metrics #

| Library | Geometric Mean QPS | Median QPS | QPS@95% | Weighted Avg QPS | AUC Value | AUC Normalized |
| --- | --- | --- | --- | --- | --- | --- |
| PatANN | 182,526.33 | 190,849.94 | 357,320.88 | 223,090.29 | 83,919.87 | 223,090.29 |
| FAISS-IVFPQFS | 15,845.13 | 15,877.04 | 302,139.75 | 71,709.75 | 53,603.75 | 71,709.75 |
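How these summary statistics are derived from the raw benchmark output is not shown here, but each can be computed from the set of (recall, QPS) points measured per library. The sketch below uses conventional definitions (geometric mean, median, 95th-percentile QPS, and a trapezoidal area under the QPS-recall curve) on made-up values; it is an assumption about the methodology, not the exact formulas behind the table.

```python
import numpy as np

# Hypothetical (recall@10, QPS) measurements for one library, one per
# parameter configuration -- illustrative values only.
recall = np.array([0.62, 0.80, 0.95, 0.99, 0.999])
qps    = np.array([350_000, 280_000, 190_000, 120_000, 60_000])

geo_mean_qps = np.exp(np.mean(np.log(qps)))   # geometric mean QPS
median_qps   = np.median(qps)
qps_at_p95   = np.quantile(qps, 0.95)         # one reading of "QPS@95%"

# Area under the QPS-vs-recall curve (trapezoidal rule over sorted recall).
order = np.argsort(recall)
r, q = recall[order], qps[order]
auc = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(r))

print(geo_mean_qps, median_qps, qps_at_p95, auc)
```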

Recall at Different QPS Levels #

| Library | Recall@10,000 QPS | Recall@50,000 QPS | Recall@100,000 QPS | Recall@200,000 QPS |
| --- | --- | --- | --- | --- |
| PatANN | 0.99991 | 0.99991 | 0.99991 | 0.99944 |
| FAISS-IVFPQFS | 0.98993 | 0.85664 | 0.56150 | 0.34455 |

Algorithm Parameters #

| Library | Points | Min K | Max K | K Range | Min Epsilon | Max Epsilon | Median Epsilon |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PatANN | 280 | 0.62374 | 0.99991 | 0.37617 | 0.72420 | 1 | 0.99718 |
| FAISS-IVFPQFS | 96 | 0.25249 | 1 | 0.74751 | 0.29272 | 1 | 0.88150 |

Key Findings #

PatANN significantly outperforms FAISS across all tested datasets, with the most dramatic improvements on billion-scale datasets, where our pattern-aware partitioning delivers a 34% reduction in query time while maintaining higher recall. Unlike FAISS’s fixed quantization approach, PatANN’s dynamic pattern matching adapts better to varying data distributions.

FAISS-IVFPQFS achieves high peak throughput (QPS at the 95th percentile), but its recall degrades sharply as the query rate increases, dropping from 98.9% recall at 10,000 QPS to just 34.5% at 200,000 QPS, whereas PatANN maintains over 99.9% recall across all tested QPS levels.
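This trade-off can be reproduced with FAISS itself: widening the search (for example, raising nprobe on an IVF index) buys recall at the cost of throughput, and vice versa. The sketch below measures recall@10 against brute-force ground truth for several nprobe settings on random data; sizes and parameters are illustrative and unrelated to the SIFT1M benchmark configuration.

```python
import time
import numpy as np
import faiss

d, nb, nq, k = 128, 100_000, 1_000, 10   # illustrative sizes, not SIFT1M itself
xb = np.random.random((nb, d)).astype("float32")
xq = np.random.random((nq, d)).astype("float32")

# Exact ground truth for recall@10 via a brute-force index.
gt = faiss.IndexFlatL2(d)
gt.add(xb)
_, I_true = gt.search(xq, k)

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)
index.train(xb)
index.add(xb)

# Larger nprobe -> higher recall but lower QPS; this is the trade-off
# behind the recall drop-off at high query rates.
for nprobe in (1, 8, 32, 128):
    index.nprobe = nprobe
    t0 = time.perf_counter()
    _, I = index.search(xq, k)
    qps = nq / (time.perf_counter() - t0)
    recall = np.mean([len(set(a) & set(b)) / k for a, b in zip(I, I_true)])
    print(f"nprobe={nprobe:4d}  recall@10={recall:.4f}  QPS={qps:,.0f}")
```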