E5-large-v2

Comprehensive information on the functionality and API usage of the E5-large-v2 model

API Reference: Embeddings API

Model Reference: E5-large-v2

Paper: Text Embeddings by Weakly-Supervised Contrastive Pre-training

The E5-large-v2 model consists of 24 layers and has an embedding dimension of 1024.

Layers | Embedding Dimension | Recommended Sequence Length
24     | 1024                | 512
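
As a quick sanity check, you can confirm the output dimensionality from an API response. A minimal sketch, using the same endpoint and response shape as the examples below:

import requests

headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer {EMBAAS_API_KEY}'}
data = {'texts': ['hello world'], 'model': 'e5-large-v2'}

response = requests.post("https://api.embaas.io/v1/embeddings/", json=data, headers=headers)
embedding = response.json()['data'][0]['embedding']
print(len(embedding))  # expected: 1024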

Recommended Scoring Methods

  • cosine-similarity
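
For reference, cosine similarity is the dot product of two vectors divided by the product of their L2 norms. A minimal NumPy sketch of the scoring function:

import numpy as np

def cos_sim(a, b):
    # dot product, normalized by the L2 norms of both vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))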

Using Instruction

Each input text should begin with either "query" or "passage"; this also applies to non-English texts. For tasks other than retrieval, the "query" instruction is generally used.

The instruction is set via the instruction parameter in the request. By default, we calculate the embeddings using the "query" instruction.
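
The instruction corresponds to the prefix convention from the E5 paper, where every input text is prepended with "query: " or "passage: " before encoding. A minimal sketch of that prefixing step (illustrative only; presumably the API applies the prefix for you when you pass the instruction parameter):

def apply_instruction(texts, instruction="query"):
    # E5 convention: prepend the instruction followed by ": " to each input text
    return [f"{instruction}: {text}" for text in texts]

print(apply_instruction(["where is the food stored in a yam plant"]))
# ['query: where is the food stored in a yam plant']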

Examples

Calculate Sentence Similarities

The following code illustrates how to compute the similarity between two scientific sentences using the cosine similarity score function. Note that the instruction is set to "query".

Similarities
import requests
from sklearn.metrics.pairwise import cosine_similarity

# Replace {EMBAAS_API_KEY} with your API key
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer {EMBAAS_API_KEY}'}

science_sentences = [
    "Parton energy loss in QCD matter",
    "The Chiral Phase Transition in Dissipative Dynamics"
]

# "query" is the recommended instruction for non-retrieval tasks such as similarity
instruction = "query"

data = {
    'texts': science_sentences,
    'model': 'e5-large-v2',
    'instruction': instruction
}

response = requests.post("https://api.embaas.io/v1/embeddings/", json=data, headers=headers)
embeddings = response.json()["data"]

# cosine_similarity expects 2D inputs, hence the wrapping lists
similarities = cosine_similarity([embeddings[0]["embedding"]], [embeddings[1]["embedding"]])
print(similarities)
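
The result is a 1×1 matrix containing the similarity of the two sentences. Note that cosine scores from E5 models tend to cluster in a narrow high range (roughly 0.7 to 1.0), so compare scores relative to each other rather than against a fixed absolute threshold.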

Information Retrieval

The following code snippet demonstrates how to retrieve the passage most relevant to a query from a given corpus. Note that the instruction for the query is "query" and the instruction for the corpus is "passage".

Retrieval
import requests
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Replace {EMBAAS_API_KEY} with your API key
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer {EMBAAS_API_KEY}'}

# Fetch embeddings for a batch of texts and return them as a 2D NumPy array
def get_embeddings(texts, model, instruction=None):
    data = {'texts': texts, 'model': model}
    if instruction:
        data['instruction'] = instruction
    response = requests.post("https://api.embaas.io/v1/embeddings/", json=data, headers=headers)
    embeddings = [entry['embedding'] for entry in response.json()['data']]
    return np.array(embeddings)

query_instruction = "query"
query_text = "where is the food stored in a yam plant"

corpus_instruction = "passage"
corpus_texts = [
    "Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term 'mixed economies' more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.",
    "The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession",
    "Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well."
]

model_name = "e5-large-v2"

# Embed the query with the "query" instruction and the corpus with "passage"
query_embeddings = get_embeddings([query_text], model_name, query_instruction)
corpus_embeddings = get_embeddings(corpus_texts, model_name, corpus_instruction)

similarities = cosine_similarity(query_embeddings, corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)  # index of the passage most similar to the query
print(corpus_texts[retrieved_doc_id])
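
To inspect the full ranking rather than only the top hit, you can sort the similarity row; a small extension of the snippet above:

# Sort corpus indices by descending similarity to the query
ranking = np.argsort(-similarities[0])
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. score={similarities[0][idx]:.4f}  {corpus_texts[idx][:60]}...")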