nlrpBENCH: A Benchmark for Natural Language Requirements Processing

  • Type:

    Conference paper 

  • Author:

    Walter F. Tichy
    Mathias Landhäußer
    Sven J. Körner

  • Abstract

    We present nlrpBENCH: a new platform and framework to improve software engineering research as well as teaching with a focus on requirements engineering during the software engineering process. It is available at http://nlrp.ipd.kit.edu.

    Recent advances in natural language processing have made it possible to process textual software requirements automatically, for example checking them for flaws or translating them into software artifacts. This development is particularly fortunate, as the majority of requirements are written in unrestricted natural language. However, many of the tools in this young area of research have been evaluated only on limited sets of examples, because there is no accepted benchmark that could be used to assess and compare these tools. To improve comparability and thereby accelerate progress, we have begun to assemble nlrpBENCH, a collection of requirements specifications meant both as a challenge for tools and a yardstick for comparison. We have gathered over 50 requirement texts of varying length and difficulty and organized them in benchmark sets. At present, there are two task types: model extraction (e.g., generating UML models) and text correction (e.g., eliminating ambiguities).

    Each text is accompanied by the expected result and metrics for scoring results. This paper describes the composition of the benchmark and the sources. Due to the brevity of this paper, we omit example tool comparisons, which are also available.


BibTeX

@inproceedings{TLK2015,
  author    = {Walter F. Tichy and Mathias Landh{\"a}u{\ss}er and Sven J. K{\"o}rner},
  title     = {nlrpBENCH: A Benchmark for Natural Language Requirements Processing},
  year      = {2015},
  month     = mar,
  booktitle = {Multikonferenz Software Engineering \& Management 2015},
  url       = {https://ps.ipd.kit.edu/downloads/},
  abstract  = {We present nlrpBENCH: a new platform and framework to improve software engineering research as well as teaching with a focus on requirements engineering during the software engineering process. It is available at http://nlrp.ipd.kit.edu. Recent advances in natural language processing have made it possible to process textual software requirements automatically, for example checking them for flaws or translating them into software artifacts. This development is particularly fortunate, as the majority of requirements are written in unrestricted natural language. However, many of the tools in this young area of research have been evaluated only on limited sets of examples, because there is no accepted benchmark that could be used to assess and compare these tools. To improve comparability and thereby accelerate progress, we have begun to assemble nlrpBENCH, a collection of requirements specifications meant both as a challenge for tools and a yardstick for comparison. We have gathered over 50 requirement texts of varying length and difficulty and organized them in benchmark sets. At present, there are two task types: model extraction (e.g., generating UML models) and text correction (e.g., eliminating ambiguities). Each text is accompanied by the expected result and metrics for scoring results. This paper describes the composition of the benchmark and the sources. Due to the brevity of this paper, we omit example tool comparisons, which are also available.},
  pptUrl    = {https://ps.ipd.kit.edu/downloads/},
}