MTBenchmark

Overview

In this website we present a proposal for a benchmark suite for Eclipse-based model transformation (MT) languages and engines. It is designed to evaluate their scalability and performance in out-place MT scenarios, and to provide an independent, common, and objective yardstick against which MT languages can be compared. It can also be used by software engineers to decide which MT language to employ.

The benchmark is composed of a set of tests, each one dealing with a particular out-place MT scenario. Each scenario has been devised to stress a particular feature of an MT, measuring its impact on the overall performance. The suite also provides a set of pre-defined input models for all scenarios, of different sizes and types, to evaluate the scalability of MT engines and to make the benchmark results comparable and repeatable across languages.

Case Studies

We have created the following dedicated web pages, where we explain each case study in the benchmark in more detail and provide the links to download the EMF metamodels and input models.

Running the benchmark

We have used our benchmark to study the scalability and time performance of four different languages: ATL [1], LinTra [2], QVT-O [3] and RubyTL [4].

All the transformations have been executed using Eclipse, and we have only measured the execution time of the transformation itself, which means that we do not take into account the time spent loading the models into memory. For ATL, QVT-O and the Linda approach (LinTra), we have launched the transformations programmatically and sequentially by means of Java code that invokes each transformation through the API provided by the language, and we have recorded the computation times using the System.currentTimeMillis() Java method.
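
For illustration, the following is a minimal Java sketch of how the times are recorded. The helpers loadInputModel and runTransformation are hypothetical placeholders for the language-specific APIs, and only the transformation call itself is timed, as described above.

 // Minimal measurement sketch. The two helper methods are hypothetical
 // placeholders for the language-specific APIs (ATL, QVT-O, LinTra).
 public class TimedRunSketch {
     public static void main(String[] args) {
         // Model loading happens outside the timed region, since loading
         // times are not part of the reported results.
         Object inputModel = loadInputModel("/path/to/input.xmi");

         long start = System.currentTimeMillis();
         runTransformation(inputModel);
         long elapsed = System.currentTimeMillis() - start;
         System.out.println("Transformation time: " + elapsed + " ms");
     }

     private static Object loadInputModel(String path) { return null; /* engine-specific loading */ }
     private static void runTransformation(Object model) { /* engine-specific invocation */ }
 }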

We have run our benchmark on a machine running Ubuntu 12.04 64-bit with 11.7 GB of RAM and 16 cores at 2.67 GHz. For ATL, QVT-O and LinTra, the Eclipse version is Kepler and the Java version is 7, where the JVM memory has been increased with the parameters -Xms512m and -Xmx10240M in order to be able to allocate larger models in memory. The ATL version we have used is 3.4.0, and the QVT-O version is 3.3.0. Regarding Ruby, its version is 1.8.6, and the Rake version (needed to run transformations sequentially) is 0.7.2. The Eclipse version in this case is Juno, since no newer version of RubyTL has been released since 2007.
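
As a side note (not part of the original setup), whether these heap settings have actually been picked up by the launch configuration can be verified programmatically:

 // Prints the maximum heap the JVM was started with, to verify that the
 // -Xms/-Xmx parameters of the launch configuration have taken effect.
 public class HeapCheck {
     public static void main(String[] args) {
         long maxBytes = Runtime.getRuntime().maxMemory();
         System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
     }
 }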

In the following we provide the links to download the Java projects and explain how to reproduce our experiments.

ATL

The ATL project with all the transformations and the Java runners to execute the whole benchmark sequentially can be obtained from here.

The Java runners are in /ATLBenchmarkExecution/src/Class2RelationalRunner/files/simulator and they are named ATLRunnerIdentity.java, ATLRunnerClass2Rel.java, ATLRunnerClass2Java.java, ATLRunnerJavaqueries.java and ATLRunnerMatching.java. Before running the benchmark, the only requirement is to specify the path where the input models are located. At the top of every runner file there is a main method which is responsible for running the benchmark; it contains the string variable that needs to be set to the path where the input models are located in the file system.
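
As an illustration, a runner follows roughly the structure sketched below; the names used here are illustrative, not the exact ones found in the project, and the only value that needs to be adapted is the path constant.

 // Illustrative sketch of a runner's structure (names are not the actual
 // ones used in the project). Only the path constant needs to be adapted.
 public class RunnerSketch {
     // Path to the folder containing the input models; set before running.
     private static final String INPUT_MODELS_PATH = "/path/to/input/models/";

     public static void main(String[] args) {
         String[] inputModels = { "model1000.xmi", "model10000.xmi" }; // example file names
         for (String model : inputModels) {
             execute(INPUT_MODELS_PATH + model);
         }
     }

     private static void execute(String modelPath) { /* invoke the transformation programmatically */ }
 }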

It is worth mentioning that some languages, ATL among them, need a cold-start phase: the first transformation always takes longer than the following ones. To account for this, our runners first warm up the virtual machine by running a transformation whose result is discarded, and only consider the results obtained after this warm-up phase.
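
A minimal sketch of that warm-up strategy (the runTransformation helper is a hypothetical placeholder):

 // Warm-up sketch: the first run only heats the virtual machine and is
 // discarded; only the subsequent runs are timed and reported.
 public class WarmUpSketch {
     public static void main(String[] args) {
         runTransformation(); // warm-up run, result and time discarded

         for (int i = 0; i < 5; i++) {
             long start = System.currentTimeMillis();
             runTransformation();
             System.out.println("Run " + (i + 1) + ": " + (System.currentTimeMillis() - start) + " ms");
         }
     }

     private static void runTransformation() { /* hypothetical: invoke the transformation */ }
 }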

Finally, we have to mention that we experienced some problems when executing the complete benchmark at once. In particular, the origin of the problem is that the models are not completely unloaded until the program stops. As a consequence, the RAM fills up with the smaller models and there is not enough space left to store the big ones, which causes the exception "java.lang.OutOfMemoryError: Java heap space"; when the transformations are run manually one by one, they execute correctly.
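
For reference, explicitly unloading the EMF resources between runs is the usual way to try to release this memory; the sketch below assumes the input and output models are held in a standard EMF ResourceSet.

 import org.eclipse.emf.ecore.resource.Resource;
 import org.eclipse.emf.ecore.resource.ResourceSet;

 // Sketch: explicitly unload all EMF resources of a ResourceSet between runs,
 // assuming the models are loaded as standard EMF resources.
 public final class ModelCleanup {
     public static void unloadAll(ResourceSet resourceSet) {
         for (Resource resource : resourceSet.getResources()) {
             resource.unload(); // drop the in-memory contents of the model
         }
         resourceSet.getResources().clear(); // detach the resources from the set
     }

     private ModelCleanup() { }
 }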

LinTra

LinTra is an improved version of the concurrent implementation for specifying model transformations that we initially introduced in [2]. It follows the Linda approach, where all the objects are stored in a distributed tuple space (so that they can be spread across different machines) that can, in turn, be accessed concurrently. However, in this benchmark we consider a preliminary version of the tuple space built as a pure Java solution, where the tuple space has been modeled with the Java HashMap data structure. In this way, the tuple space allows concurrent and fast access to the elements stored in it, but it lacks the mechanisms necessary to support distribution.
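
A minimal sketch of what such an in-memory tuple space might look like is shown below; the names are illustrative and do not correspond to LinTra's actual API, and a concurrent map is used here to make the concurrent access explicit.

 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 // Illustrative sketch (not LinTra's actual API): model elements are kept in a
 // map keyed by their identifier, so that several workers can access them
 // concurrently; distribution across machines is not supported.
 public class InMemoryTupleSpace<T> {
     private final Map<String, T> elements = new ConcurrentHashMap<>();

     public void write(String id, T element) {
         elements.put(id, element);
     }

     public T read(String id) {
         return elements.get(id);
     }

     public T take(String id) {
         return elements.remove(id); // read and remove, like Linda's "in" operation
     }
 }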

The three projects with the LinTra implementation of the case studies can be downloaded from here. In order to execute the benchmark, the file /LinTra/src/runners/LinTraBenchmarkRunner.java must be launched after setting, in that same file, the value of the variables that point to the path where the input models are located.

QVT-O

The required QVT-O projects can be downloaded from here. The Java runner file to execute the benchmark automatically is /QVTO_Runner/src/QVTRunner.java; before running it, the variables that point to the input model paths must be updated with the appropriate values.

Since QVT-O works with EMF models, as ATL does, we experienced the same kind of problems with model loading and unloading. That forced us to run the benchmark for the biggest models manually.

We noticed that the first time any transformation is executed it takes longer than on subsequent executions. Because of that, the solution we provide warms up the QVT-O virtual machine by executing every transformation with a small model before running it with the real input models.

RubyTL

The projects to launch the transformations developed in RubyTL are available here. In order to obtain RubyTL, please consult this website. Since RubyTL is a model transformation language defined as a Ruby internal DSL, the latter also has to be installed. The Eclipse version necessary to run RubyTL is Juno. Information about how to launch transformation tasks (stored in Rake files) can be found in this manual. In our projects, such Rake files can be found under the folder Execute Transformation. There are versions for launching the transformations both sequentially and manually one by one. We have used the predefined method t.benchmark with the parameter :execution to obtain the execution times taken by the transformations.

Results

Finally, we present the results(*) we have obtained and briefly discuss them for the different case studies, after executing each one 5 times for every input model (except for the models with one million classes, which were executed only twice).

The execution times (expressed in seconds) for our benchmark can be found in the following Excel files: Identity.xlsx, Class2Relational.xlsx, Class2Java.xlsx, JavaQueries.xlsx and MatchingResults.xlsx.

(*) Note that some transformations take a very long time, which is why the tables are not complete. As soon as we have more results, we will upload them to this website.

References

[1] Jouault, F., Allilaire, F., Bézivin, J., Kurtev, I.: ATL: A model transformation tool. SCP 72(1-2) (2008) 31–39

[2] Burgueño, L., Troya, J., Wimmer, M., Vallecillo, A.: On the Concurrent Execution of Model Transformations with Linda. In: Proc. of the Workshop on Scalability in Model Driven Engineering (BigMDE), ACM (2013)

[3] OMG: MOF QVT Final Adopted Specification. Object Management Group. (2005)

[4] Cuadrado, J.S., Molina, J.G., Tortosa, M.M.: RubyTL: A Practical, Extensible Transformation Language. In: Proc. of ECMFA. Volume 4066 of LNCS. Springer (2006) 158–172
