Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and beyond

Link:
Author:
Publisher/Corporate body:
IEEE
Year of publication:
2017
Media type:
Text
Keywords:
  • Abstraction finding
  • Ambiguity finding
  • App review analysis
  • False negatives
  • False positives
  • Information retrieval
  • Natural language processing
  • Precision
  • Recall
  • Requirements specification defect finding
  • Tracing
Description:
  • [Context and Motivation] Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools. [Problem] Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used. [Principal Ideas] This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts. [Contribution] The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool's task.
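The precision/recall trade-off named in the title can be made concrete with a minimal sketch. The function and the item names below are purely illustrative (not from the panel itself): it computes the standard IR definitions of precision and recall for a hypothetical NL RE tool run, e.g. an ambiguity finder or a trace-link recoverer.

```python
def precision_recall(retrieved, relevant):
    """Standard IR metrics for a tool's output.

    retrieved: items the tool flagged (e.g., candidate ambiguous sentences)
    relevant:  items a human oracle judged correct
    """
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = retrieved & relevant
    # Precision penalizes false positives; recall penalizes false negatives.
    precision = len(true_positives) / len(retrieved) if retrieved else 1.0
    recall = len(true_positives) / len(relevant) if relevant else 1.0
    return precision, recall

# Hypothetical run: the tool flags 4 sentences, 3 confirmed by the oracle,
# while 5 sentences are truly ambiguous.
p, r = precision_recall({"s1", "s2", "s3", "s4"},
                        {"s1", "s2", "s3", "s5", "s6"})
# p = 0.75 (one false positive), r = 0.6 (two false negatives)
```

Which of the two metrics matters more depends on the RE context: for defect finding, a missed defect (false negative) is typically costlier than an extra candidate to review, which is precisely the context-dependence the panel addresses.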
License:
  • info:eu-repo/semantics/closedAccess
Source system:
Research information system of UHH (Universität Hamburg)

Internal metadata
Source record
oai:www.edit.fis.uni-hamburg.de:publications/7ab220e5-220d-4bc0-9b80-ca603698f839