lexrankScores

Document scoring with LexRank algorithm

Since R2020a

Description

scores = lexrankScores(documents) scores the specified documents for importance according to pairwise similarity values using the LexRank algorithm. The function measures pairwise similarity using cosine similarity and computes importance using the PageRank algorithm.
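The computation can be sketched as follows. This is an illustrative reimplementation, not the toolbox code: it assumes the documents are already encoded as an N-by-V term-count matrix counts (for example, the Counts property of a bagOfWords model) and uses an assumed damping factor of 0.85; the toolbox implementation may differ in term weighting and convergence details.

```matlab
% Illustrative LexRank sketch: cosine similarity + PageRank power iteration.
% counts is an N-by-V document-by-term matrix, e.g. full(bag.Counts).
d = 0.85;                                % assumed damping factor
n = size(counts,1);
norms = sqrt(sum(counts.^2,2));          % document vector norms
S = (counts*counts') ./ (norms*norms');  % pairwise cosine similarities
S(~isfinite(S)) = 0;                     % guard against empty documents
rowSums = sum(S,2);
rowSums(rowSums == 0) = 1;               % avoid division by zero
T = S ./ rowSums;                        % row-stochastic transition matrix
scores = ones(n,1)/n;
for k = 1:100                            % PageRank power iteration
    scores = (1-d)/n + d*(T'*scores);
end
```

The fixed point of this iteration assigns high scores to documents that are similar to many other high-scoring documents, which is the notion of centrality that LexRank uses.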

scores = lexrankScores(bag) scores documents encoded by a bag-of-words or bag-of-n-grams model.

Examples


Calculate LexRank Scores of Documents
Create an array of tokenized documents.

str = [
    "the quick brown fox jumped over the lazy dog"
    "the fast brown fox jumped over the lazy dog"
    "the lazy dog sat there and did nothing"
    "the other animals sat there watching"];
documents = tokenizedDocument(str)
documents = 
  4x1 tokenizedDocument:

    9 tokens: the quick brown fox jumped over the lazy dog
    9 tokens: the fast brown fox jumped over the lazy dog
    8 tokens: the lazy dog sat there and did nothing
    6 tokens: the other animals sat there watching

Calculate their LexRank scores.

scores = lexrankScores(documents);

Visualize the scores in a bar chart.

figure
bar(scores)
xlabel("Document")
ylabel("Score")
title("LexRank Scores")

Calculate LexRank Scores of Bag-of-Words Model
Create a bag-of-words model from the text data in sonnets.csv.

filename = "sonnets.csv";
tbl = readtable(filename,'TextType','string');
textData = tbl.Sonnet;
documents = tokenizedDocument(textData);
bag = bagOfWords(documents)
bag = 
  bagOfWords with properties:

          Counts: [154x3527 double]
      Vocabulary: ["From"    "fairest"    "creatures"    "we"    "desire"    "increase"    ","    "That"    "thereby"    "beauty's"    "rose"    "might"    "never"    "die"    "But"    "as"    "the"    "riper"    "should"    ...    ] (1x3527 string)
        NumWords: 3527
    NumDocuments: 154

Calculate LexRank scores for each sonnet.

scores = lexrankScores(bag);

Visualize the scores in a bar chart.

figure
bar(scores)
xlabel("Document")
ylabel("Score")
title("LexRank Scores")

Input Arguments


documents — Input documents
tokenizedDocument array | string array of words | cell array of character vectors

Input documents, specified as a tokenizedDocument array, a string array of words, or a cell array of character vectors. If documents is not a tokenizedDocument array, then it must be a row vector representing a single document, where each element is a word. To specify multiple documents, use a tokenizedDocument array.

bag — Input model
bagOfWords object | bagOfNgrams object

Input bag-of-words or bag-of-n-grams model, specified as a bagOfWords object or a bagOfNgrams object. If bag is a bagOfNgrams object, then the function treats each n-gram as a single word.
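For instance (an illustrative sketch, not toolbox code): each column of a bagOfNgrams model's Counts matrix corresponds to one n-gram, so the n-grams can be viewed as single-word tokens by joining their constituent words:

```matlab
% Illustrative: view each n-gram as a single token by joining its words.
ngramTokens = join(bag.Ngrams);  % one joined string per n-gram (column of Counts)
counts = full(bag.Counts);       % N-by-NumNgrams count matrix
% Cosine similarities between documents are then computed from the rows
% of counts exactly as for single words.
```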

Output Arguments


scores — LexRank scores
N-by-1 vector

LexRank scores, returned as an N-by-1 vector, where scores(i) corresponds to the score for the ith input document and N is the number of input documents.
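For example (illustrative usage): because higher scores indicate more central documents, sorting the scores selects candidate documents for an extractive summary:

```matlab
% Illustrative: pick the two highest-scoring documents as a summary.
[~,idx] = sort(scores,'descend');
summaryDocuments = documents(idx(1:2));
```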

References

[1] Erkan, Güneş, and Dragomir R. Radev. "LexRank: Graph-based Lexical Centrality as Salience in Text Summarization." Journal of Artificial Intelligence Research 22 (2004): 457–479.

Version History

Introduced in R2020a