SoundSystem example

From Atenea




The sound system models the transmission of a sound via a media, for instance the Internet. It consists of a soundmaker (e.g., a person) that periodically transmits a sound to a microphone. The microphone is connected to a media (the Internet), which transports the sound to a speaker. When the sound reaches the speaker, it is amplified. If the media is already full of packets (sounds), incoming sounds are lost.

System Modeling


The metamodel of this system is shown here:


The system is composed of elements, each of which has a position. The elements are soundmakers, media (for this example, the Coder and Decoder metaclasses can be ignored; they are used in the example with Observers), microphones, speakers, and sounds.
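Since the metamodel figure is not reproduced here, the following is a rough Python sketch of the metaclasses it describes. The attribute names, the (x, y) representation of positions, and the media capacity bound are assumptions for illustration, not the actual Ecore definitions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Position:
    x: int
    y: int

@dataclass
class Element:
    """Common superclass: every element of the system has a position."""
    pos: Position

@dataclass
class Sound(Element):
    decibels: int = 20        # initial loudness used by the GenSound rule

@dataclass
class SoundMaker(Element):
    pass

@dataclass
class Microphone(Element):
    sounds: List[Sound] = field(default_factory=list)

@dataclass
class Media(Element):
    """The transport media (the Internet); capacity is an assumed bound."""
    capacity: int = 5
    sounds: List[Sound] = field(default_factory=list)

@dataclass
class Speaker(Element):
    pass
```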

We associate every class of the metamodel with a picture. This graphical concrete syntax will be used for the definition of the behavioral rules.



We have designed five rules to model the behavior of the system, including the rule that initializes the system.


The InitialRule creates the initial model. This model is composed of a soundmaker, a microphone, a media (the Internet), and a speaker, all of which have a position. The NAC pattern forbids the triggering of the rule if the soundmaker already exists, i.e., it prevents the rule from being triggered more than once.


The GenSound periodic rule makes the soundmaker emit a sound every 3 time units. This rule makes use of an action execution element, which explicitly forbids the execution of the rule while the same soundmaker is emitting another sound. The action execution states that the element sm (the soundmaker) is participating in an execution of the rule GenSound, so the rule cannot be applied if there is a match of this NAC. In the RHS, we can see that the sound is now in the microphone, so it acquires the microphone's position. The sound has 20 decibels. The duration of the action modeled by this rule is one time unit.
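As a sketch of this timing behavior (a hypothetical helper, not e-Motions code), the following computes when GenSound can fire given its period and duration; the action-execution NAC corresponds to the `busy_until` check:

```python
def gen_sound_schedule(horizon, period=3, duration=1):
    """Return the time units at which GenSound fires within [0, horizon).

    A firing is blocked while a previous execution by the same soundmaker
    is still running, mirroring the action-execution NAC of the rule.
    """
    firings = []
    busy_until = 0            # time at which the soundmaker becomes free
    for t in range(horizon):
        if t % period == 0 and t >= busy_until:
            firings.append(t)
            busy_until = t + duration
    return firings
```

With the rule's own values (period 3, duration 1) the NAC never blocks anything, e.g. `gen_sound_schedule(10)` yields `[0, 3, 6, 9]`; the guard only matters when an execution outlasts the period.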


The SoundFlowInternet rule moves a sound from the microphone to the Internet. The sound acquires the position of the latter. The time spent by this rule is the Manhattan distance between the positions of the microphone and the media, plus one.
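Both SoundFlowInternet and SoundFlowSpeaker use this same duration formula. A minimal sketch, assuming positions are represented as (x, y) tuples:

```python
def manhattan(p, q):
    """Manhattan distance between two (x, y) positions."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def transfer_duration(src_pos, dst_pos):
    """Duration of a sound-flow rule: Manhattan distance plus one."""
    return manhattan(src_pos, dst_pos) + 1
```

For example, moving a sound from a microphone at (0, 0) to a media at (2, 3) takes `transfer_duration((0, 0), (2, 3))`, i.e. 6 time units.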


The SoundFlowSpeaker rule moves the sound from the Internet to the speaker. The sound gets the position of the speaker and its decibels are amplified. The time this rule spends is the Manhattan distance between the Internet and the speaker, plus one.


The Overload rule models the loss of a sound when the media has no space left to receive more sounds. Losing a sound takes one time unit.
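The overload condition can be sketched as follows; the capacity bound and the function name are assumptions made for illustration, since in e-Motions the condition is expressed graphically in the rule:

```python
def deliver_to_media(media_sounds, capacity, sound):
    """Try to place an incoming sound in the media.

    Returns True if the sound was accepted, and False if the media was
    already full and the sound is lost (the Overload rule); either
    outcome takes one time unit in the model.
    """
    if len(media_sounds) < capacity:
        media_sounds.append(sound)
        return True
    return False
```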



We can configure the e-Motions launcher to run the simulation as shown in the next figure. Note that we do not need to specify an initial model of the system, since we have a rule that creates it.


If the Eclipse launcher is configured as specified in the picture, the resulting model will appear in the Result folder about one minute after the execution starts.

To display the resulting model in a tree view, the metamodel must first be registered. To do so, right-click on the .ecore file (SoundSystem.ecore) and select Register metamodel. Then, right-click on the obtained model, select Open With -> Other... and choose Sample Ecore Model Editor. You can now navigate the objects of the model. To see the objects' properties, right-click on the panel containing the tree view and select Show Properties View.


The file contains the project with all files required to try this example: the metamodel definition, the graphical concrete syntax for the metamodel, and its corresponding behavioral specifications. To import this project, right-click on the navigation view, select Import... -> General -> Existing Projects into Workspace -> Select archive file, and then select the file.


A paper that describes this example can be found here.
