2,000+ miners online

Performance at scale for
semantic search

The decentralized vector database delivering relevant results at any scale. Powered by a global network of miners — not a single cloud provider.

pip install rem-vectordb

Trusted in production

2,000+
Active miners
50M+
Vectors stored
<100ms
p95 latency
99.9%
Uptime
quickstart.py
from rem import REM

client = REM(api_key="rem_xxx")

# Create a collection with AES-256-GCM encrypted fields
collection = client.create_collection("products", dimension=384,
    encrypted_fields=["email", "pii_data"])

# Upsert vectors with metadata (encrypted fields auto-handled);
# embed() is your own embedding function (must return 384-dim vectors)
collection.upsert([
    {"id": "p1", "values": embed("..."), "metadata": {
        "category": "electronics", "price": 299.99}},
])

# Hybrid search: vector similarity + keyword matching + filters
results = collection.query(
    vector=embed("wireless headphones"),
    query_text="noise cancelling",    # BM25 keyword boost
    filter={"price": {"$lte": 500}},  # metadata filtering
    top_k=10)

Why teams choose REM

Purpose-built for AI workloads. Decentralized by design.

AES-256-GCM Encryption

Per-field metadata encryption with per-namespace keys. Vectors are obfuscated before reaching miners — your data stays private even on a decentralized network.

Hybrid Search

Combine vector similarity with BM25 keyword matching via Reciprocal Rank Fusion. Get the best of semantic understanding and exact keyword relevance.
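Reciprocal Rank Fusion itself is simple: a document's fused score is the sum of 1/(k + rank) over every result list it appears in. A minimal pure-Python sketch of the idea (k=60 as in the original RRF paper; an illustration, not REM's internal implementation):

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each list is ordered best-first; a document's fused score is
    the sum of 1 / (k + rank) over every list it appears in.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits  = ["p3", "p1", "p7"]   # from vector similarity
keyword_hits = ["p1", "p9", "p3"]   # from BM25
print(rrf_fuse([vector_hits, keyword_hits]))  # -> ['p1', 'p3', 'p9', 'p7']
```

Documents ranked well by both retrievers (here p1 and p3) float to the top even when neither retriever alone put them first.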

Metadata Filtering

Pinecone-compatible filter operators ($eq, $gt, $in, $and, $or, and more). Filter results by any metadata field with minimal performance overhead.
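To illustrate the filter semantics, here is a toy matcher for a subset of those operators (a sketch of the query language only, not REM's server-side code):

```python
def matches(metadata, flt):
    """Check a metadata dict against a Pinecone-style filter (subset)."""
    for key, cond in flt.items():
        if key == "$and":                      # all sub-filters must match
            if not all(matches(metadata, c) for c in cond):
                return False
        elif key == "$or":                     # at least one must match
            if not any(matches(metadata, c) for c in cond):
                return False
        elif isinstance(cond, dict):           # operator clause on a field
            value = metadata.get(key)
            for op, target in cond.items():
                if op == "$eq" and value != target:
                    return False
                if op == "$gt" and not (value is not None and value > target):
                    return False
                if op == "$lte" and not (value is not None and value <= target):
                    return False
                if op == "$in" and value not in target:
                    return False
        elif metadata.get(key) != cond:        # bare value means $eq
            return False
    return True

item = {"category": "electronics", "price": 299.99, "in_stock": True}
print(matches(item, {"price": {"$lte": 500}, "in_stock": True}))  # True
```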

Decentralized Network

2,000+ miners across the globe. No single point of failure. Your data is replicated across 3 miners for redundancy.

Sub-100ms Latency

Queries routed to the nearest miner. Distributed caching ensures consistent low-latency responses globally.

Batch Operations

Execute up to 10 queries in a single API call. Perfect for recommendation engines, AI agents, and parallel retrieval pipelines.
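With the 10-queries-per-call limit, larger workloads just need chunking before calling the batch endpoint (a generic helper sketch; the limit is the one stated above):

```python
def chunked(items, size=10):
    """Split items into lists of at most `size` (REM's per-call limit)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

queries = [{"vector": [0.0], "top_k": 5} for _ in range(23)]
batches = chunked(queries)          # 3 calls: 10 + 10 + 3
print([len(b) for b in batches])    # [10, 10, 3]
```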

Framework Integrations

Native LangChain and LlamaIndex integrations. Drop-in vector store that works with your existing RAG pipeline in minutes.

Full CRUD Operations

Complete vector lifecycle — upsert, query, fetch by ID, and delete. Built for production RAG with source document retrieval and GDPR compliance.

10x More Affordable

Free tier with $20 credit on signup. Pay-as-you-go pricing that's a fraction of centralized alternatives like Pinecone.

Built for every AI use case

From RAG pipelines to recommendation engines — REM powers production AI at any scale.

RAG Pipelines

Build retrieval-augmented generation with native LangChain and LlamaIndex support. Store document chunks with metadata, retrieve relevant context with hybrid search, and fetch source documents by ID for citations.

from rem.integrations.langchain import REMVectorStore
from langchain_openai import OpenAIEmbeddings

store = REMVectorStore(
    api_key="rem_xxx",
    collection_name="docs",
    embedding=OpenAIEmbeddings()
)
store.add_texts(["Your documents here..."])
results = store.similarity_search("query", k=5)

Semantic Search

Go beyond keywords. Hybrid search combines vector similarity with BM25 keyword matching via Reciprocal Rank Fusion. Filter by any metadata field with Pinecone-compatible operators.

results = collection.query(
    vector=embed("wireless headphones"),
    query_text="noise cancelling",  # BM25 boost
    hybrid_alpha=0.5,               # 50/50 blend
    filter={"price": {"$lte": 500}},
    top_k=10
)

AI Agents

Give your AI agents long-term memory. Store conversation embeddings, tool outputs, and knowledge. Batch queries let agents search multiple memory banks in a single API call.

# Batch query across multiple memory types
q = embed("user question")
results = collection.query_batch([
    {"vector": q, "top_k": 5, "filter": {"type": "conversation"}},
    {"vector": q, "top_k": 3, "filter": {"type": "tool_output"}},
    {"vector": q, "top_k": 3, "filter": {"type": "knowledge"}},
])

Recommendations

Power product and content recommendations with vector similarity. Use metadata filters for personalization, batch queries for multiple recommendation feeds, and real-time upserts as users interact.

# Find similar products, filtered by category
results = collection.query(
    vector=user_preference_embedding,
    top_k=20,
    filter={
        "category": {"$in": ["electronics", "gadgets"]},
        "in_stock": True,
        "price": {"$lte": 500}
    }
)

Integrates with your stack

Native integrations with the frameworks you already use. Drop-in and go.

Python SDK

Sync and async clients with full type hints

pip install rem-vectordb

LangChain

Drop-in VectorStore for RAG chains

pip install rem-vectordb[langchain]

LlamaIndex

Native VectorStore for index pipelines

pip install rem-vectordb[llamaindex]

Prefer REST? Use any language with our REST API — just add the X-API-Key header.
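For example, from Python's standard library alone (the base URL and path below are placeholders; check the REM API docs for the real endpoints):

```python
import json
import urllib.request

# Placeholder endpoint -- substitute the real REM REST base URL.
URL = "https://api.example.com/v1/collections/products/query"

req = urllib.request.Request(
    URL,
    data=json.dumps({"vector": [0.1, 0.2], "top_k": 10}).encode(),
    headers={"X-API-Key": "rem_xxx", "Content-Type": "application/json"},
    method="POST",
)
# Send with: urllib.request.urlopen(req)
print(req.get_header("X-api-key"))  # rem_xxx
```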

From zero to production in minutes

Three steps. No infrastructure to manage. No servers to provision.

STEP 01

Create a Collection

Define your vector dimension, distance metric, and encrypted fields. Your collection is automatically distributed and encrypted across miners.
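For reference, cosine similarity (a common choice of distance metric) between two embeddings is just the normalized dot product; a pure-Python sketch for intuition:

```python
import math

def cosine_similarity(a, b):
    """Normalized dot product: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))            # 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 6))  # 0.0
```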

STEP 02

Upsert Vectors

Upload embeddings with metadata. Sensitive fields are AES-256-GCM encrypted, vectors are obfuscated, and data is replicated across 3 miners.

STEP 03

Search & Retrieve

Hybrid search combines vector similarity with BM25 keywords. Filter by metadata. Fetch source docs by ID. Batch queries for parallel retrieval.

Simple, transparent pricing

Start free. Scale without surprises.

Free

$0/month

$20 free credit included

  • $20 free credit on signup
  • Hybrid search & filtering
  • AES-256-GCM encryption
  • LangChain & LlamaIndex
  • 60 requests/min
  • Community support
Get Started

Business

$99.99/month

For production workloads

  • 10M vectors included
  • 100M queries included
  • Hybrid search & filtering
  • Batch queries (10 per call)
  • Dedicated support
  • 99.99% uptime SLA
Upgrade to Business

Start building with REM

Free credit included. No credit card required. Deploy your first vector collection in under 60 seconds.