Llama-cpp
This notebook goes over how to use Llama-cpp embeddings within LangChain.
%pip install --upgrade --quiet llama-cpp-python
from langchain_community.embeddings import LlamaCppEmbeddings
API Reference: LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
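The constructor also accepts additional llama.cpp settings; the sketch below assumes the n_ctx and n_gpu_layers parameters documented in the LlamaCppEmbeddings API reference, and the values shown are illustrative only.
import numpy as np  # used later for scoring embeddings
from langchain_community.embeddings import LlamaCppEmbeddings

# A minimal sketch with optional settings (verify these parameters against the API reference)
llama = LlamaCppEmbeddings(
    model_path="/path/to/model/ggml-model-q4_0.bin",
    n_ctx=2048,       # assumed parameter: context window size
    n_gpu_layers=0,   # assumed parameter: layers to offload to GPU if built with GPU support
)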
text = "This is a test document."
query_result = llama.embed_query(text)
doc_result = llama.embed_documents([text])
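embed_query returns a single embedding (a list of floats), while embed_documents returns one embedding per input text. As a minimal usage sketch, assuming numpy is installed, you can score each embedded document against the query embedding with cosine similarity:
# Cosine similarity between the query embedding and each document embedding
def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(query_result, doc_vec) for doc_vec in doc_result]
print(scores)  # higher scores indicate closer semantic similarity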
Related
- Embedding model conceptual guide
- Embedding model how-to guides