The decentralized vector database delivering relevant results at any scale. Powered by a global network of miners — not a single cloud provider.
pip install rem-vectordb
Trusted in production
from rem import REM
client = REM(api_key="rem_xxx")
# Create collection with AES-256-GCM encrypted fields
collection = client.create_collection(
    "products", dimension=384,
    encrypted_fields=["email", "pii_data"])
# Upsert vectors with metadata (encrypted fields auto-handled)
collection.upsert([
    {"id": "p1", "values": embed("..."), "metadata": {
        "category": "electronics", "price": 299.99}},
])
# Hybrid search: vector similarity + keyword matching + filters
results = collection.query(
    vector=embed("wireless headphones"),
    query_text="noise cancelling",    # BM25 keyword boost
    filter={"price": {"$lte": 500}},  # metadata filtering
    top_k=10)

Purpose-built for AI workloads. Decentralized by design.
Per-field metadata encryption with per-namespace keys. Vectors are obfuscated before reaching miners — your data stays private even on a decentralized network.
Combine vector similarity with BM25 keyword matching via Reciprocal Rank Fusion. Get the best of semantic understanding and exact keyword relevance.
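The fusion step can be sketched in a few lines of pure Python. This is the textbook Reciprocal Rank Fusion formula, not REM's internal implementation; the constant k=60 is the conventional choice from the RRF literature.

```python
# Reciprocal Rank Fusion: merge a vector-similarity ranking and a
# BM25 keyword ranking into one list. Each document's fused score is
# the sum of 1 / (k + rank) over every list it appears in.

def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):  # ranks are 1-based
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d7"]   # ranked by cosine similarity
keyword_hits = ["d1", "d9", "d3"]  # ranked by BM25
print(rrf_fuse([vector_hits, keyword_hits]))  # → ['d1', 'd3', 'd9', 'd7']
```

Note how d1 wins overall despite topping neither list: appearing near the top of both rankings beats a single first place, which is exactly the behavior that makes RRF a robust way to blend semantic and keyword relevance.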
Pinecone-compatible filter operators ($eq, $gt, $in, $and, $or and more). Filter results by any metadata field with zero performance overhead.
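To make the operator semantics concrete, here is a minimal sketch of how such a filter expression evaluates against one record's metadata. It covers only the operators named above (plus the bare-value shorthand for equality) and is illustrative, not the engine's actual matching code.

```python
# Evaluate a Pinecone-style filter dict against a single metadata dict.
def matches(metadata, flt):
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(metadata, sub) for sub in cond):
                return False
        elif key == "$or":
            if not any(matches(metadata, sub) for sub in cond):
                return False
        elif isinstance(cond, dict):          # operator clause, e.g. {"$lte": 500}
            value = metadata.get(key)
            for op, target in cond.items():
                ok = (
                    value == target if op == "$eq"
                    else value is not None and value > target if op == "$gt"
                    else value is not None and value <= target if op == "$lte"
                    else value in target if op == "$in"
                    else False                 # unknown operator: reject
                )
                if not ok:
                    return False
        elif metadata.get(key) != cond:        # bare value is shorthand for $eq
            return False
    return True

record = {"category": "electronics", "price": 299.99}
print(matches(record, {"price": {"$lte": 500}, "category": {"$in": ["electronics"]}}))
```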
2,000+ miners across the globe. No single point of failure. Your data is replicated across 3 miners for redundancy.
Queries routed to the nearest miner. Distributed caching ensures consistent low-latency responses globally.
Execute up to 10 queries in a single API call. Perfect for recommendation engines, AI agents, and parallel retrieval pipelines.
Native LangChain and LlamaIndex integrations. Drop-in vector store that works with your existing RAG pipeline in minutes.
Complete vector lifecycle — upsert, query, fetch by ID, and delete. Built for production RAG with source document retrieval and GDPR compliance.
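The lifecycle semantics can be illustrated with a tiny in-memory stand-in: upsert means insert-or-overwrite on the same ID, fetch returns records by ID, and delete removes them (the operation behind GDPR erasure requests). ToyCollection and its method names here are illustrative only, not the REM client API.

```python
# In-memory sketch of upsert / fetch / delete semantics.
class ToyCollection:
    def __init__(self):
        self._records = {}

    def upsert(self, items):
        for item in items:                     # same ID overwrites in place
            self._records[item["id"]] = item

    def fetch(self, ids):
        return [self._records[i] for i in ids if i in self._records]

    def delete(self, ids):                     # e.g. a GDPR erasure request
        for i in ids:
            self._records.pop(i, None)

col = ToyCollection()
col.upsert([{"id": "p1", "values": [0.1, 0.2], "metadata": {"category": "electronics"}}])
col.upsert([{"id": "p1", "values": [0.3, 0.4], "metadata": {"category": "gadgets"}}])
print(col.fetch(["p1"])[0]["metadata"])   # latest upsert wins
col.delete(["p1"])
print(col.fetch(["p1"]))                  # deleted records are gone
```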
Free tier with generous credit. Pay-as-you-go pricing that's a fraction of centralized alternatives like Pinecone.
From RAG pipelines to recommendation engines — REM powers production AI at any scale.
Build retrieval-augmented generation with native LangChain and LlamaIndex support. Store document chunks with metadata, retrieve relevant context with hybrid search, and fetch source documents by ID for citations.
from rem.integrations.langchain import REMVectorStore
from langchain_openai import OpenAIEmbeddings
store = REMVectorStore(
    api_key="rem_xxx",
    collection_name="docs",
    embedding=OpenAIEmbeddings(),
)
store.add_texts(["Your documents here..."])
results = store.similarity_search("query", k=5)

Go beyond keywords. Hybrid search combines vector similarity with BM25 keyword matching via Reciprocal Rank Fusion. Filter by any metadata field with Pinecone-compatible operators.
results = collection.query(
    vector=embed("wireless headphones"),
    query_text="noise cancelling",  # BM25 boost
    hybrid_alpha=0.5,               # 50/50 blend
    filter={"price": {"$lte": 500}},
    top_k=10,
)

Give your AI agents long-term memory. Store conversation embeddings, tool outputs, and knowledge. Batch queries let agents search multiple memory banks in a single API call.
# Batch query across multiple memory types
results = collection.query_batch([
    {"vector": embed("user question"), "top_k": 5,
     "filter": {"type": "conversation"}},
    {"vector": embed("user question"), "top_k": 3,
     "filter": {"type": "tool_output"}},
    {"vector": embed("user question"), "top_k": 3,
     "filter": {"type": "knowledge"}},
])

Power product and content recommendations with vector similarity. Use metadata filters for personalization, batch queries for multiple recommendation feeds, and real-time upserts as users interact.
# Find similar products, filtered by category
results = collection.query(
    vector=user_preference_embedding,
    top_k=20,
    filter={
        "category": {"$in": ["electronics", "gadgets"]},
        "in_stock": True,
        "price": {"$lte": 500},
    },
)

Native integrations with the frameworks you already use. Drop-in and go.
Sync and async clients with full type hints
pip install rem-vectordb
Drop-in VectorStore for RAG chains
pip install rem-vectordb[langchain]
Native VectorStore for index pipelines
pip install rem-vectordb[llamaindex]
Prefer REST? Use any language with our REST API — just add the X-API-Key header.
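For instance, a query over REST can be built with nothing but the Python standard library. The endpoint URL and JSON body shape below are illustrative assumptions; only the X-API-Key header itself is documented here, so check the API reference for the real routes.

```python
# Build (but do not send) an authenticated REST request with urllib.
import json
import urllib.request

body = json.dumps({"vector": [0.1] * 384, "top_k": 10}).encode()
req = urllib.request.Request(
    "https://api.example.com/collections/products/query",  # hypothetical URL
    data=body,
    headers={"X-API-Key": "rem_xxx", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; here we only inspect it.
print(req.get_header("X-api-key"))  # urllib normalizes header casing
```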
Three steps. No infrastructure to manage. No servers to provision.
Define your vector dimension, distance metric, and encrypted fields. Your collection is automatically distributed and encrypted across miners.
Upload embeddings with metadata. Sensitive fields are AES-256-GCM encrypted, vectors are obfuscated, and data is replicated across 3 miners.
Hybrid search combines vector similarity with BM25 keywords. Filter by metadata. Fetch source docs by ID. Batch queries for parallel retrieval.
Start free. Scale without surprises.
$20 free credit included
For growing applications
For production workloads
Free credit included. No credit card required. Deploy your first vector collection in under 60 seconds.