Main Page/Resources/E-motions/SoundSystemObexampleb

From Atenea



Introduction

The sound system models the transmission of sounds over a media, for instance the Internet. It consists of a soundmaker (e.g., a person) who periodically transmits a sound to a microphone. The microphone may be connected to the media (the Internet) either through a coder or directly; the media transports the sound to a speaker, again either through a decoder or directly. Finally, when the sound reaches the speaker, it is amplified. If the media is full of packets (sounds), incoming sounds are lost. Adding observers to the system allows us to monitor some QoS properties of the system. In this example, we will monitor the throughput, the mean time between failures and the jitter.


System Modeling

Structure

The metamodel of the system is the same as the one in the example without observers:


Image:soundsystem.png


Now, apart from the metamodel of the system, we add a new one with the observers.


Image:soundsystemobservers.png


These observers are able to monitor the QoS properties of our system. Specifically, the properties that will be monitored are:

  • Throughput. It is the amount of work that can be performed, or the amount of output that can be produced, by a system or component in a given period of time. Throughput is defined as th = n/t, where n is the amount of work the system has performed and t is the time the system has been working. The work the system performs depends on the kind of system being modeled.
  • Mean time between failures. It refers to the arithmetic mean (average) time between failures of a system: MTBF = t/f, where t is the time the system has been working and f is the number of failures of the system.
  • Jitter. In the context of voice over IP, it is defined as a statistical variance of the RTP data packet inter-arrival time. RTP (Real-time Transport Protocol) provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. To estimate the jitter after receiving the i-th packet, we calculate the change of inter-arrival time, divide it by 16 to reduce noise, and add it to the previous jitter value. The division by 16 helps to reduce the influence of large random changes. The formula used is: j(i) = j(i−1) + (|D(i−1,i)| − j(i−1))/16, where j(i) is the current jitter value and j(i−1) is the previous jitter value. In this jitter estimator formula, the value D(i,j) is the difference of relative transit times for the two packets. The difference is computed as D(i,j) = (Rj − Ri) − (Sj − Si), where Sj (Sendj) is the time packet j appears in the system (that is, the time at which it is sent by the transmitter) and Rj (Receivej) is the time packet j leaves the system because it has been processed (that is, the time at which it is received by the receiver).

We include these observers in our system by combining the two metamodels.
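The three formulas above can be sketched as a small observer class, assuming a global simulation clock measured in time units. The class and method names (QoSObservers, on_receive, on_loss) are illustrative, not part of the e-Motions example itself:

```python
class QoSObservers:
    """Tracks throughput, MTBF and jitter as defined above."""

    def __init__(self):
        self.work_done = 0        # n: sounds processed so far
        self.failures = 0         # f: sounds lost so far
        self.jitter = 0.0         # j(i): current jitter estimate
        self.prev_transit = None  # (S, R) times of the previous packet

    def throughput(self, t):
        """th = n / t, for a system that has been working t time units."""
        return self.work_done / t

    def mtbf(self, t):
        """MTBF = t / f."""
        return t / self.failures

    def on_loss(self):
        """A sound was lost: count one failure."""
        self.failures += 1

    def on_receive(self, sent, received):
        """A sound reached the speaker. Update the jitter with
        j(i) = j(i-1) + (|D(i-1,i)| - j(i-1)) / 16, where
        D(i,j) = (Rj - Ri) - (Sj - Si)."""
        self.work_done += 1
        if self.prev_transit is not None:
            s_prev, r_prev = self.prev_transit
            d = (received - r_prev) - (sent - s_prev)
            self.jitter += (abs(d) - self.jitter) / 16
        self.prev_transit = (sent, received)
```

For example, two packets with identical transit times leave the jitter at 0, while a packet whose transit time grows by 16 time units raises the estimate by exactly 1.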


We associate every class of the metamodel with a picture. This graphical concrete syntax will be used for the definition of the behavioral rules.


Image:imagesssobs.png



Behavior

To model the behavior of the system, and considering that it may have a coder and a decoder connected to the Internet, we have designed these rules:


Image:ruleObsSS2.png


The InitialRule creates the initial model. At the beginning we have a soundmaker, a microphone, a media (the Internet), and a speaker, all of them with a position. The NAC pattern ensures that this rule is executed only once: if the soundmaker already exists, the rule cannot be applied again.


Image:initSSObs.png


The GenSound periodic rule makes the soundmaker emit a sound every 3 time units. This rule makes use of an action execution element, which explicitly forbids the execution of the rule while the same soundmaker is emitting another sound. The action execution states that the element sm (the soundmaker) is participating in an execution of the GenSound rule, so the rule cannot be applied if there is a match of this NAC. In the RHS pattern, we can see that the sound is now in the microphone, so it acquires its position. The sound has 20 decibels. A JitterIndOb observer is associated with the sound, and its timeStamp attribute stores the time at which the sound appeared in the system. The duration of the action modeled by this rule is one time unit.


Image:genSoundOb.png
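The periodicity of GenSound can be sketched as follows, assuming simulation starts at time 0; the constant names and the function are illustrative, not part of the e-Motions model:

```python
# One sound every PERIOD time units; each emission lasts DURATION time
# unit, and the timeStamp attribute records when the sound appeared.
PERIOD = 3
DURATION = 1

def emission_timestamps(horizon):
    """timeStamp values of the sounds emitted strictly before `horizon`."""
    return list(range(0, horizon, PERIOD))
```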



The SoundFlowInternetSlow and SoundFlowInternetFast rules move a sound from the microphone to the Internet. The former rule is executed when there is no coder between the microphone and the Internet, and the latter when there is one. In both, the sound acquires the position of the Internet. The time spent by the first rule is the Manhattan distance between the microphone and the media, plus one. The second one spends one time unit less. Consequently, a sound moves faster from the microphone to the Internet when there is a coder between them.


Image:soundFlowInternetSlow.png Image:soundFlowInternetFast.png


For moving the sound from the Internet to the speaker, we count on the rules SoundFlowSpeakerSlow and SoundFlowSpeakerFast. The former rule is executed when there is no decoder between the Internet and the speaker, and the latter when there is one. The sound gets the position of the speaker and its decibels are amplified. The time the first rule spends is the Manhattan distance between the Internet and the speaker, plus one. The second rule spends one time unit less. Therefore, a sound moves faster from the Internet to the speaker when there is a decoder between them.


Image:soundFlowSpeakerSlow.png Image:soundFlowSpeakerFast.png
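Both pairs of flow rules derive their durations in the same way. A minimal sketch, assuming positions are (x, y) grid coordinates as suggested by the pictures (the function names are illustrative):

```python
def manhattan(a, b):
    """Manhattan distance between two (x, y) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def flow_duration(src_pos, dst_pos, has_device):
    """Duration of a sound-flow rule: distance + 1 for the slow rule,
    one time unit less when a coder/decoder is present (fast rule)."""
    slow = manhattan(src_pos, dst_pos) + 1
    return slow - 1 if has_device else slow
```

For instance, with a microphone at (0, 0) and the media at (2, 3), the slow rule takes 6 time units and the fast one 5.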


The ConsumeSound rule updates the jitter value and an attribute of the throughput observer when a sound finally reaches the speaker. After updating these values, the sound and its associated observer disappear from the system.


Image:consumeSound.png


The Overload rule models the behavior when a sound is lost because the Internet has no more space for keeping more sounds. The loss of a sound is automatic in this case, so the rule spends 0 time units. The observer that monitors the mean time between failures is updated in this rule.


Image:overLoadObs.png


We use an ongoing rule, UpdateObs, to keep the throughput value updated at all times. Please note that the mtbf attribute is updated when a sound is lost, and the jitter value is updated when a sound reaches the speaker. The throughput, however, is updated continuously.


Image:updateObs2.png


Self-adapting the system

Apart from computing the QoS values for the system, observers can be very useful for defining alternative behaviors of the system, depending on the QoS levels. For instance, the system can self-adapt under certain conditions, since we are able to search for states of the system in which some attributes of the observers take certain values.

The system will self-adapt according to the value of throughput in the system. Let us consider that its optimal value is 1/7. As previously shown, sounds flow faster in the system when there are a coder and a decoder connected to the Internet, which means a higher throughput value. Two rules have been developed for achieving the self-adaptation of the system and keeping the throughput value around 1/7.

The first rule is called AddDevices. It is executed when the throughput value is below 1/7 and there are no coder and decoder in the system. When this rule is applied, a coder and a decoder appear in the system and, consequently, sounds start to flow faster. After applying this rule, the throughput value will tend to increase.


Image:addDevices.png


The other rule, called RemoveDevices, aims to do the opposite. It is executed when the throughput value is above 1/7 and there are a coder and a decoder in the system. After this rule is applied, the throughput value will tend to decrease.


Image:removeDevices.png
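Together, the two adaptation rules behave like a simple threshold controller around the optimal throughput of 1/7. A sketch under that assumption (the function and names are illustrative):

```python
OPTIMAL = 1 / 7  # target throughput value

def adapt(throughput, has_devices):
    """Return the name of the adaptation rule that would fire, or None."""
    if throughput < OPTIMAL and not has_devices:
        return "AddDevices"      # coder + decoder appear; sounds flow faster
    if throughput > OPTIMAL and has_devices:
        return "RemoveDevices"   # coder + decoder removed; sounds flow slower
    return None                  # system already in the desired configuration
```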

Simulation

We can configure the e-Motions launcher to run the simulation as shown in the next figure. Please note that we do not need to specify an initial model of the system, since we have a rule that creates it.

Image:ExecSimObs2.png


If the user configures the Eclipse launcher as specified in the picture, the resulting model will appear in the Result folder around one minute after the execution starts.

To display the resulting model in a tree view, the metamodel needs to be registered. To do so, right-click on the .ecore file (SoundSystem.ecore) and select Register metamodel. Then, right-click on the obtained model, select Open With -> Other... and choose Sample Ecore Model Editor. You can then navigate the objects of the model. To see the objects' properties, right-click on the panel containing the tree view and select Show Properties View.

Download

The SoundSystemObservers.zip file contains the project with all the files required to try this example: the metamodel definition, the graphical concrete syntax for the metamodel and its corresponding behavioral specifications. To import this project, right-click on the navigation view, select Import... -> General -> Existing Projects into Workspace -> Select archive file, and then select the SoundSystemObservers.zip file.

References

A paper that describes this example can be found here.
