QNMs


Overview

Queuing Network Models (QNMs) provide powerful notations and tools for modeling and analyzing the performance of many different kinds of systems. They have been intensively used to study the effects of resource contention on the performance and scalability of computer and communication systems. Here we present our proposal for the semantics of QNMs in terms of a generic behavioral model specified by means of a fixed set of behavioral rules. These high-level rules can be executed, hence allowing users to conduct simulations and obtain performance metrics for QNMs using different probability distributions for arrival and service times.

For the supporting tool where QNMs can be specified and simulated, please go to [2].

System Modeling

The modeling of the system consists of:

  1. Defining the abstract syntax, i.e., describing the structure of QNMs (by means of a metamodel).
  2. Defining the concrete syntax, i.e., the graphical notation given to the elements in the metamodel.
  3. Defining the semantics, for which we have developed a set of behavioral rules.

The abstract and concrete syntaxes are shown in [2]. However, although the metamodel shown in [2], named ePMIF, is used to depict the models in the tool's graphical user interface, we use another metamodel, internal to our tool and transparent to users, to perform the simulations in e-Motions ([3]). We describe that metamodel in the following subsection.

Internal QNMs Structural Model for e-Motions

In our approach, QNMs are modeled using the ePMIF metamodel ([1],[2]). Of course, since ePMIF is an extension of PMIF 2 [4], PMIF 2 models can be used too; in ePMIF we tried to respect the structure of PMIF 2 precisely for backwards-compatibility reasons.

However, we realized that we needed to optimize this representation to make it more compact and efficient for internal handling in e-Motions. This new representation is internal to our tool and transparent to users, who still use ePMIF models to describe their QNM models. The new metamodel, which we shall refer to as eMotions-ePMIF, is shown below and described here:

  • In ePMIF, transitions are specified for objects Workload (both OpenWorkload and ClosedWorkload) and ServiceRequest. In eMotions-ePMIF, there are no transitions for ClosedWorkload objects because they are not needed. As for OpenWorkload objects, we have introduced a specialization: OWL1T for OpenWorkloads whose jobs always enter the same server, and OWLnT for OpenWorkloads whose jobs may transit to more than one server when they enter the system. The same specialization has been applied to ServiceRequests. These specializations improve the efficiency of the rules shown later.
  • In eMotions-ePMIF, we have only one type of ServiceRequest instead of three: TServiceRequest (for time service request). There are several reasons for this. First, if we have only one type of object for service requests, then we need fewer behavioral rules to model the system and, consequently, it is more efficient. Second, WorkUnitServiceRequest objects can be directly converted to TimeServiceRequest objects as long as WorkUnitServer objects are converted to Server objects. Finally, we can also convert DemandServiceRequest objects to TimeServiceRequest objects, because the service time of the TimeServiceRequest can be obtained by dividing the service demand of the DemandServiceRequest by the number of visits. Since all service requests are now of type TServiceRequest, this class has many attributes. The attributes serviceDistr and serviceParams are those used for the service times of jobs. Attributes wklds, tS and aS are used for the jobs flow, as explained later in the behavioral rules.
  • There are no WorkUnitServer objects anymore; they are all of type Server now. This is directly correlated with the previous point: if there are no WorkUnitServiceRequests, then we do not need WorkUnitServers (and vice versa).
  • In ClosedWorkloads, we do not specify parameters for think time. The reason is that the thinkDevice of a ClosedWorkload is nothing but a Server, just like the rest of the servers. As a consequence, we have considered that the think time (i.e., its probability distribution and parameters) is specified by a TServiceRequest, just like for the other Servers. However, we do have a centralSrv reference to a Server through which all jobs have to flow. Its use is shown later in the behavioral rules.
  • Attributes transitProbs are no longer of type double but of type integer. They contain the probabilities on a scale from 1 to 10000. They represent the same probabilities as in the ePMIF metamodel [2], but multiplied by 10000. Furthermore, in this new attribute the value in position n of the sequence is the value in position n of the ePMIF sequence plus all the previous values, i.e., a cumulative sum (all multiplied by 10000). This representation improves the efficiency of transitions in the simulation. (A sketch of this encoding is shown right after this list.)
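
To make this cumulative encoding concrete, here is a minimal Python sketch (illustrative only, not part of e-Motions; the function names are ours) of how a sequence of ePMIF transition probabilities could be turned into the transitProbs representation, and how a random trans value between 1 and 10000 selects a position in the transitTo sequence:

  import random

  def to_cumulative_scale(probs, scale=10000):
      """Convert ePMIF probabilities, e.g. [0.2, 0.5, 0.3], into the cumulative
      integer encoding of transitProbs: each position holds its own value plus
      all previous ones, multiplied by the scale -> [2000, 7000, 10000]."""
      cumulative, acc = [], 0
      for p in probs:
          acc += round(p * scale)
          cumulative.append(acc)
      return cumulative

  def pick_target_position(transit_probs, trans=None):
      """Given a random trans value in 1..10000 (as assigned by the rule that
      resets trans from -1), return the position of the target node in the
      transitTo sequence: the first position whose cumulative value covers trans."""
      if trans is None:
          trans = random.randint(1, transit_probs[-1])
      for pos, threshold in enumerate(transit_probs):
          if trans <= threshold:
              return pos
      return len(transit_probs) - 1

  # Probabilities 0.2, 0.5 and 0.3 for three possible target servers.
  transit_probs = to_cumulative_scale([0.2, 0.5, 0.3])    # [2000, 7000, 10000]
  print(pick_target_position(transit_probs, trans=6500))  # 1, i.e., the second server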


Image:PMIFeMotions.png

Adding Observers to the structural model

Starting from the metamodel for e-Motions described before, we can design behavioral rules that specify the dynamics of QNMs in terms of jobs flow. Now, we introduce our proposal to also monitor performance indicators for QNMs in our behavioral rules. To calculate the value of these simulation parameters we propose the use of observers. An observer is an object whose purpose is to monitor the state of the system: the state of the objects, of the actions, or both. Observers, as any other objects, have a state and a well-defined behavior. The attributes of the observers capture their state, and are used to store the variables that we want to monitor.

To introduce observers in the behavioral rules, we need to specify a metamodel for them. It is shown below. The idea is to combine both metamodels so that observers can be used in our behavioral rules. In fact, since e-Motions allows users to merge several metamodels in the definition of a DSL behavior, we can define the observers metamodel in a non-intrusive way, i.e., we do not need to modify the system metamodel to add observers to the rules. Furthermore, this approach also enables the reuse of observers across different DSLs.

Image:obsMM.png

In the observers metamodel we can see that there are three types of observers, each of which has been designed to monitor the performance metrics of a different type of object. In this way, we have WorkloadOb for monitoring Workloads, ServerOb to monitor Servers, and ServiceRequestOb to monitor TServiceRequests. All of them have a reference to the class EObject, which specifies that they can refer to any object.

The aim of WorkloadOb observers is to monitor performance properties of workloads. The idea is to associate an observer of this type to each workload. Their attributes are the following (please note that, since there may be more than one type of workload, when we refer to jobs we mean jobs belonging to the monitored workload):

  • throughput. It measures the throughput of the workload. In open workloads, it is calculated as the number of jobs that leave the system per unit of time. In closed workloads, it refers to the number of jobs that complete a cycle within the network per unit of time.
  • respTimeAcc. It is used to calculate the response time of the workload. It stores the sum of the response times of all the jobs.
  • respTimeAv. It stores the mean response time, calculated as the average of the response times of all the jobs.
  • jobsAcc. This attribute is used to compute the mean number of jobs in the network.
  • jobsAv. It calculates the mean number of jobs in the network.
  • thrTrace, respTTrace and jobsTrace. They store a trace with the values for throughput, response time and jobs average, respectively, at different times of the simulation. They are used for determining when to stop the simulation.

ServerOb observers deal with the monitoring of servers. They basically monitor the queue length in the attribute lengthQAv, considering the jobs belonging to any kind of workload that can request service in the server, and keep its trace in the attribute lengthQTrace. Attribute lengthQAcc is used for the calculation of lengthQAv. As explained in [5], the queue length of a server considers both the jobs in the queue and the jobs being served.

Each service request in the model will have a ServiceRequestOb observer associated with it. Considering that a service request is the relation between a server and a workload that requests its service, the data monitored by this observer represents the performance relation between them. In this way, when we mention workloads (or jobs belonging to them) and servers in the explanation of the attributes, we mean the workloads (or jobs) and servers associated with the service request:

  • served. It counts the number of jobs processed by the server.
  • timeBusy. It stores the time the server has been busy processing jobs.
  • utilization. Percentage of the time (between 0 and 1) that the server has been busy.
  • waitingTAcc. Used to calculate the average waiting time. It stores the sum of the waiting times of all the jobs.
  • waitingTAv. It monitors the average waiting time in the queue for all the jobs processed by the server, where the waiting time of a job is the time that elapses between the moment the job arrives at the queue of the server and the moment it starts being processed.
  • serviceTAcc. Used to calculate the average service time. It stores the sum of the service times of all the jobs.
  • serviceTAv. It calculates the average service time of all the jobs processed by the server.
  • residenceTAv. It stores the average residence time of the jobs processed by the server. The residence time of a job is the time elapsed between the moment the job arrives at the queue of the server and the moment it leaves the server.
  • throughput. It computes the throughput, i.e., the number of jobs processed by the server per unit of time.
  • utilizTrace, waitTrace, servTrace, residTrace and thrTrace. For the performance metrics described, these attributes keep the traces of their values throughout the simulation. They are used to determine when the simulation has to stop.

Behavior

Before explaining how we have modeled the behavior of QNMs by means of in-place rules, please let us make two small clarifications.

In the first place, we have a rule that, whenever the trans attribute of OWLnT and SRnT objects has the value -1, assigns them a random value between 1 and 10000. This value is used for the job transitions. Secondly, we need to explain how attributes wklds, tS and aS in TServiceRequests are used. These attributes are sequences of integers that model the flow of jobs among servers. For each job present in the network, they store its identifier (in sequence wklds), the time it entered the network (in tS - timeStamp) and the time it arrived at the Server where it currently is (in aS - arrivalServer). In this way, at a certain point in the simulation and at a certain server, we could have, for example, the following sequences for these attributes: wklds: Sequence(2, 4, 1), tS: Sequence(13, 25, 4) and aS: Sequence(21, 25, 32). Attributes tS and aS are used for the calculation of some performance metrics. Please note that the order matters: the values in the same position of the three sequences all describe the same job.
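
As an illustration only (the attribute names wklds, tS and aS come from the text; everything else is an assumption of ours), the following Python sketch mimics how the three parallel sequences are kept in step when a job is added to a service request and when its first job moves to another one:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class TServiceRequestJobs:
      """Minimal stand-in for the jobs-flow part of a TServiceRequest."""
      wklds: List[int] = field(default_factory=list)  # job identifiers
      tS: List[int] = field(default_factory=list)     # time each job entered the network
      aS: List[int] = field(default_factory=list)     # time each job arrived at this server

  def add_job(sr: TServiceRequestJobs, job_id: int, entered_network_at: int, now: int) -> None:
      # The three sequences are always updated together, so position i of
      # each sequence refers to the same job.
      sr.wklds.append(job_id)
      sr.tS.append(entered_network_at)
      sr.aS.append(now)

  def move_first_job(src: TServiceRequestJobs, dst: TServiceRequestJobs, now: int) -> None:
      # FCFS-style transition of the oldest job from one service request to another.
      job_id = src.wklds.pop(0)
      entered = src.tS.pop(0)
      src.aS.pop(0)
      add_job(dst, job_id, entered, now)

  # The sequences from the example in the text:
  src = TServiceRequestJobs(wklds=[2, 4, 1], tS=[13, 25, 4], aS=[21, 25, 32])
  dst = TServiceRequestJobs()
  move_first_job(src, dst, now=40)
  print(src.wklds, dst.wklds)  # [4, 1] [2]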

Now let us explain the e-Motions rules that describe the behavior of QNMs.

The first one, shown below, is the EnterOpenWLFnT rule, which models how jobs belonging to a workload of type OWLnT enter the network. It shows how a job may transit to one of several servers once it enters the system through the Source node. We can see in the figure that in both the LHS and RHS patterns we have the OpenWorkload to which the job belongs, the Server to which the job transits, the ServiceRequest that relates both of them, and the Source node at which jobs belonging to the OpenWorkload enter. We can also see the relations between these objects, according to the metamodel.

There are two variables in the EnterOpenWLFnT rule, duration and pos. The former specifies the duration of the rule. This rule is for OpenWorkloads whose arrival distribution is Poisson. Consequently, the duration of the rule is determined by an Exponential distribution whose parameter is the inverse of the parameter of the Poisson: the parameter of a Poisson gives the average number of job arrivals per unit of time, while an Exponential with the inverse of that parameter gives the average time elapsed between the arrivals of two jobs. If the OpenWorkload has a different probability distribution, we only need to change the value of duration. The latter variable, pos, is used to determine the Server to which the job transits. It contains the position that the server has in the sequence transitTo of the OWLnT, and it is used in the OCL condition needed to trigger the rule. This condition basically states that the Server matched by this rule corresponds to the Server to which the job has to transit, as specified by the attribute trans of the OWLnT. In the RHS of the rule we see that the trans attribute of the OWLnT is -1 (so that a new random value will be assigned to it) and its jobsIn attribute has been incremented. The attributes of the TServiceRequest are updated so that a new identifier is given to the new job (wklds sequence) and the current elapsed time is added to sequences tS and aS.
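
The relation between the Poisson arrival parameter and the Exponential rule duration can be illustrated with the following sketch (illustrative only; in e-Motions this value is computed in the duration attribute of the rule):

  import random

  def inter_arrival_time(poisson_rate: float) -> float:
      """If the Poisson parameter gives the average number of job arrivals per
      unit of time, the time between two consecutive arrivals follows an
      Exponential distribution with mean 1 / poisson_rate."""
      return random.expovariate(poisson_rate)

  # With an average of 4 arrivals per unit of time, the mean inter-arrival time is 0.25.
  samples = [inter_arrival_time(4.0) for _ in range(100_000)]
  print(sum(samples) / len(samples))  # close to 0.25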

We have a very similar rule, named EnterOpenWLF1T, which is the same as EnterOpenWLFnT except for the variable pos and the condition in the LHS of the rule. This rule is used for OpenWorkloads of type OWL1T, i.e., OpenWorkloads that always transit to the same Server when they enter through the Source node.


Image:EnterOpenWLFnT.png



The TransitJobsnT rule, shown below, models the transition of jobs among servers (and also from a Server to a Node of type SinkNode). Jobs can belong to either OpenWorkloads or ClosedWorkloads, so this rule is used for both. Specifically, the TServiceRequest tagged as srS, i.e., the service request of the source server in the transition, is of type SRnT in this rule. This means that jobs have more than one node to which they can transit. There is a similar rule, named TransitJobs1T, for TServiceRequests of type SR1T. That rule is basically the same as TransitJobsnT but without the pos variable and the second condition in the LHS, which are both used to determine the Node to which jobs have to transit, as explained for the EnterOpenWLFnT rule. The duration of the rule is also specified as described before. In the rule shown below, the TServiceRequest follows an Exponential distribution. If it followed a Gamma distribution, for example, then the duration attribute would be as follows, because that distribution has two parameters: Image:gammaDistrQNM.png
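
As an illustration of how the duration could be sampled for different service distributions (the distribution names and parameter conventions below are assumptions for this sketch, not the exact e-Motions expressions):

  import random

  def sample_duration(distribution, params):
      """Sample a service time for the rule duration.
      'Exponential' is assumed to take one parameter (the mean service time);
      'Gamma' takes two parameters (shape and scale), hence the different
      duration expression mentioned in the text."""
      if distribution == "Exponential":
          mean = params[0]
          return random.expovariate(1.0 / mean)
      if distribution == "Gamma":
          shape, scale = params
          return random.gammavariate(shape, scale)  # mean = shape * scale
      raise ValueError("Unsupported distribution: " + distribution)

  print(sample_duration("Exponential", [0.5]))
  print(sample_duration("Gamma", [2.0, 0.25]))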

In the LHS of the rule we see all the objects needed for this rule to be triggered: the source Server, the target Node (either a Server or a SinkNode), the Workload to which the jobs belong, the TServiceRequests associated with the mentioned elements, and the Observers whose attributes will be updated in the RHS of the rule. In the RHS of the rule we see how the three sequences representing the jobs in the source TServiceRequest (srS) are updated with the removal of the corresponding jobs, and the same sequences in the target TServiceRequest (srT) are updated with the addition of those jobs. The helpers used are shown after the figure with the rule. Other elements that change in the RHS are:

  • The trans attribute of srS is set to -1 so a new random value will be given to it for the next triggering of the rule.
  • The attributes of the observer of the source TServiceRequest are updated accordingly. Basically, the update consists in calculating the new value of the attributes taking into account the previous value and the jobs that are being processed in the rule. In this way:
    • waitingTAcc and serviceTAcc attributes update their values with the waiting time and the service time of the jobs processed, respectively.
    • Attributes waitingTAv and serviceTAv are updated using those values.
    • The value of the throughput is calculated with the new total number of jobs served and the current elapsed time.
    • The timeBusy attribute is updated by adding the duration of the rule to its old value, and the utilization is updated using timeBusy and the current elapsed time. The residenceTAv is also updated.
  • Regarding the observer associated with the source Server (sOb), its attributes for the average queue length are updated using the helper lengthQA.


Image:TransitJobsnT2.png


Image:HelpersQNMs2.png


Although the code may look a bit verbose, the calculation of these performance parameters is quite straightforward. The mean waiting time is calculated as the sum of waiting times divided by the number of jobs that received service. In the same way, the mean service time is the total service time divided by the number of jobs that completed service. The same applies to the mean residence time: it is the total residence time divided by the number of jobs that completed service. As for the mean queue length, we compute the average over time and not over events, because we can obtain the elapsed time at any point in the simulation.
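
These calculations can be summarized with the following sketch (illustrative Python with assumed parameter names; in e-Motions they are expressed as OCL helpers such as the ones in the figure above):

  def waiting_time_av(waiting_t_acc, served):
      """Mean waiting time: accumulated waiting time over the jobs served."""
      return waiting_t_acc / served if served else 0.0

  def service_time_av(service_t_acc, served):
      """Mean service time: accumulated service time over the jobs served."""
      return service_t_acc / served if served else 0.0

  def residence_time_av(residence_t_acc, served):
      """Mean residence time: accumulated residence (waiting + service) time
      over the jobs served."""
      return residence_t_acc / served if served else 0.0

  def utilization(time_busy, elapsed_time):
      """Fraction of time (between 0 and 1) the server has been busy."""
      return time_busy / elapsed_time if elapsed_time else 0.0

  def length_q_av(length_q_acc, elapsed_time):
      """Mean queue length, averaged over time rather than over events:
      length_q_acc accumulates queue length multiplied by the time it held."""
      return length_q_acc / elapsed_time if elapsed_time else 0.0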

Continuing with the explanation of the TransitJobsnT and TransitJobs1T rules, their NACs forbid the application of the rules if the source TServiceRequest (srS) is already participating in an action of this type. The scheduling policy of the source Server is FCFS (First Come First Served). Basically, the number of jobs processed by the rule is the minimum of the quantity attribute of the source Server and the number of jobs in the queue. If the scheduling policy is IS (Infinite Server), the rule is very similar; we only need to change the number of jobs that are processed: in that case, all the jobs in the queue are processed.
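
The number of jobs handled by one application of the rule, for the two scheduling policies mentioned, can be expressed as follows (a sketch with assumed names):

  def jobs_to_process(policy, server_quantity, jobs_in_queue):
      """FCFS: at most 'quantity' jobs of the source Server are taken from the
      queue. IS (Infinite Server): all queued jobs are processed at once."""
      if policy == "FCFS":
          return min(server_quantity, jobs_in_queue)
      if policy == "IS":
          return jobs_in_queue
      raise ValueError("Unsupported scheduling policy: " + policy)

  print(jobs_to_process("FCFS", server_quantity=2, jobs_in_queue=5))  # 2
  print(jobs_to_process("IS", server_quantity=2, jobs_in_queue=5))    # 5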

The TransitJobsnT rule depicted in the figure is used when there is only one kind of Workload in our model. If there is more than one Workload, we need to add the NAC shown below, so that the job that arrived first at the source Server is the one processed. In fact, the NAC forbids the triggering of the rule if there is a TServiceRequest, different from the one in the rule matching, that points to the same source Server and to a Workload, and whose first job in the queue arrived earlier than the first job in the TServiceRequest of the matching. A paraphrase of its effect is given after the figure.

Image:NAC.png
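
The effect of this NAC can be paraphrased with the following sketch (purely illustrative; we assume that the relevant arrival times at the server are the ones stored in the aS sequences): among the non-empty service requests pointing to the same source Server, only the one whose first queued job arrived earliest may fire the rule.

  def may_fire(candidate_aS, other_aS_lists):
      """Return True if the candidate TServiceRequest (represented by its aS
      sequence) holds the earliest-arrived first job among all non-empty
      service requests of the same source Server; otherwise the NAC blocks it."""
      if not candidate_aS:
          return False
      first = candidate_aS[0]
      return all(not other or other[0] >= first for other in other_aS_lists)

  # The candidate's first job arrived at time 21; another workload's arrived at 18.
  print(may_fire([21, 25], [[18, 30]]))  # False: the other service request goes first
  print(may_fire([21, 25], [[26]]))      # True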

The ExitOpenWLF rule, shown below, models the exit of jobs from the network. Consequently, it is applied only to OpenWorkloads. The rule is instantaneous. The reason why there is a reference from TServiceRequests to Nodes (instead of only to Servers) in our metamodel is to associate a TServiceRequest with the SinkNode. In this way, when this TServiceRequest contains jobs, this rule is fired and the corresponding attributes in the OpenWorkload are updated as follows (a sketch of these calculations is given after the list):

  • The jobsOut attribute of the OpenWorkload is updated with the helper jobsO (the helpers used in this rule are shown after the figure of the rule).
  • The throughput of the observer is updated using the jobsOut value.
  • The mean response time is calculated in the respTimeAv attribute and uses the value of respTimeAcc.
  • The respTimeAcc calls the helper ResponseTWL.
  • The mean number of jobs is calculated following the same approach as the mean queue length explained before. Thus, we take into account the jobs exiting the network and how long they have stayed within it. Its calculation is done with the jobsA helper.
  • The jobs in the TServiceRequest are deleted, since they have exited the network.
  • Regarding the thrTrace, respTTrace and jobsTrace attributes, the new values calculated for the throughput, the mean response time and the jobs average are appended to each of the sequences, respectively. In this way, a new value is added to these sequences every time one or more jobs leave the system.
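
The workload-level updates performed when a batch of jobs exits can be sketched as follows (illustrative names and bookkeeping; the actual computations are carried out by the jobsO, ResponseTWL and jobsA helpers shown after the rule figure):

  def on_jobs_exit(now, exiting_tS, jobs_out, resp_time_acc, jobs_acc):
      """Update the WorkloadOb values when the jobs whose network-entry times
      are stored in exiting_tS leave through the SinkNode."""
      time_in_network = sum(now - t for t in exiting_tS)
      jobs_out += len(exiting_tS)              # jobsOut
      resp_time_acc += time_in_network         # respTimeAcc (cf. the ResponseTWL helper)
      resp_time_av = resp_time_acc / jobs_out  # respTimeAv
      throughput = jobs_out / now              # jobs leaving the system per unit of time
      jobs_acc += time_in_network              # time spent in the network by the exiting jobs
      jobs_av = jobs_acc / now                 # mean number of jobs, averaged over time
      return jobs_out, resp_time_acc, resp_time_av, throughput, jobs_acc, jobs_av

  # Two jobs that entered the network at times 13 and 25 leave at time 40.
  print(on_jobs_exit(now=40, exiting_tS=[13, 25], jobs_out=10, resp_time_acc=200.0, jobs_acc=300.0))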


Image:ExitOpenWLF.png

Image:HelpersQNMs3.png



There is a rule similar to ExitOpenWLF to update the attributes of observers associated with ClosedWorkloads. The attributes are the same except for the one for the mean number of jobs, which is no longer necessary. In that rule, instead of the SinkNode of the OpenWorkload, we have the centralSrv associated with the ClosedWorkload.

Finally, we have another rule, UpdateTraces, to update the trace attributes in the other observers. They are updated either when jobs leave the system (in OpenWorkloads) or when jobs arrive at the centralSrv (for ClosedWorkloads). The rule is instantaneous. The rule that updates the traces in open networks is shown here:

Image:UpdateTraces.png
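
The traces are what the observers use to decide when the simulation can stop, but the concrete criterion is not fixed in the text above. Purely as an assumption, a typical choice would be to stop when the last recorded values of a trace have stabilized, for example:

  def trace_has_converged(trace, window=10, tolerance=0.01):
      """Assumed stopping criterion (not taken from the rules above): the metric
      is considered stable when, over the last 'window' recorded values, the
      spread relative to the latest value falls below 'tolerance'."""
      if len(trace) < window:
          return False
      recent = trace[-window:]
      latest = recent[-1]
      if latest == 0:
          return False
      return (max(recent) - min(recent)) / abs(latest) < tolerance

  print(trace_has_converged([1.0, 0.9, 0.95] + [0.93] * 10))  # True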

References

[1] Javier Troya, Antonio Vallecillo: "A Domain-Specific Language to Specify and Simulate Queuing Network Models". Computer Standards & Interfaces 36(5) (2014) 863-879

[2] xQNM: a Domain-Specific Language for the specification and simulation of Queuing Network Models (link), 2012

[3] The e-Motions tool, 2011

[4] Connie U. Smith, Catalina M. Lladó, Ramon Puigjaner: "Performance Model Interchange Format (PMIF 2): a comprehensive approach to Queueing Network Model interoperability". Performance Evaluation 67(7) (2010) 548-568

[5] R. Jain: "The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling". Wiley (1991)
