
Table 6 Measures (M) of Precision (Pr), Recall (Rc) and \(\hbox {F}_1\) scores for Overall Dataset and Task A

From: Paraphrase type identification for plagiarism detection using contexts and word embeddings

| Embedding / Alignment | M | Overall: ConceptNet Numberbatch | Overall: FastText | Overall: GloVe | Task A: ConceptNet Numberbatch | Task A: FastText | Task A: GloVe |
|---|---|---|---|---|---|---|---|
| SW Alg. | Pr | 0.76158 | 0.68496 | 0.71786 | 0.66757 | 0.61279 | 0.63582 |
| | Rc | 0.84658 | 0.82085 | 0.78619 | 0.80403 | 0.77855 | 0.76629 |
| | \(\hbox {F}_1\) | **0.80184** | 0.74678 | 0.75047 | **0.72947** | 0.68580 | 0.69498 |
| Meteor | Pr | 0.74246 | 0.65843 | 0.70000 | 0.64333 | 0.58004 | 0.61300 |
| | Rc | 0.79774 | 0.77281 | 0.74832 | 0.74151 | 0.70151 | 0.70661 |
| | \(\hbox {F}_1\) | 0.76911 | 0.71105 | 0.72335 | 0.68894 | 0.63502 | 0.65648 |
| Sultan | Pr | 0.74671 | 0.66075 | 0.70408 | 0.66085 | 0.58692 | 0.61435 |
| | Rc | 0.75142 | 0.73845 | 0.70532 | 0.67048 | 0.65437 | 0.63557 |
| | \(\hbox {F}_1\) | 0.74906 | 0.69744 | 0.70470 | 0.66563 | 0.61881 | 0.62478 |

  1. Bold values indicate the highest \(\hbox {F}_1\) score for each task (Overall dataset and Task A)
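
The reported \(\hbox {F}_1\) values appear consistent with the standard definition as the harmonic mean of precision and recall. A minimal sketch checking one row of the table (SW Alg. with ConceptNet Numberbatch on the Overall dataset), assuming exactly that formula:

```python
# Minimal check, assuming the standard F1 = 2 * Pr * Rc / (Pr + Rc).
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values taken from Table 6: SW Alg. + ConceptNet Numberbatch, Overall dataset.
pr, rc = 0.76158, 0.84658
print(round(f1_score(pr, rc), 5))
# ≈ 0.80183, consistent with the reported 0.80184 up to rounding
# of the published precision and recall values.
```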