
Performance Analysis

The procedure for constructing the coupled models EF and CS is omitted here because it is straightforward; their schematics were shown in Figure 4.8.

The time units of all atomic models were set to TimeUnit.Sec. The Generator used here is autonomous, and its inter-generation time for jobs is an exponential random variable with mean 5; the holding time at Busy of Server is a constant 10; Buffer's sending delay is 2.
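As a quick illustration of these parameters (a plain-Python sketch, not DEVS# code), the Generator's inter-arrival times can be sampled from an exponential distribution with mean 5 seconds:

```python
import random

# Sketch (not DEVS# code): the Generator's inter-generation time is
# exponential with mean 5 s; random.expovariate takes the rate 1/mean.
def next_interarrival(mean=5.0):
    return random.expovariate(1.0 / mean)

SERVICE_TIME = 10.0  # Server: constant holding time at Busy
SEND_DELAY = 2.0     # Buffer: constant sending delay

random.seed(1)
samples = [next_interarrival() for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean, close to 5.0
```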

We will analyze how the performance indices change as the number of servers varies. The number of servers used in CS can be set by passing a different number $ n$ to the following static function defined in Ex_ClientServer/Program_CS.cs.

    static Coupled MakeTotalClientServerSystem(int n)

The simulation settings used here are: the simulation ending time is 10,000 seconds; there is no display of the continuously increasing $ t_e$ ; the time scale factor is set to its maximum, so that the clock jumps directly to the next event time; and there is no display of discrete event transitions. The following code shows the case where the number of servers is 5.

    static void Main(string[] args)
    {
        Coupled Sys = MakeTotalClientServerSystem(5);
        Sys.PrintCouplings();

        SRTEngine simEngine = new SRTEngine(Sys, 10000, null); // ending time = 10,000 sec
        simEngine.SetAnimationFlag(false);                     // no display of increasing t_e
        simEngine.SetTimeScale(double.MaxValue);               // clock jumps to the next event time
        simEngine.Set_dtmode(SRTEngine.PrintStateMode.P_NONE, false); // no display of transitions
        simEngine.RunConsoleMenu();
    }

Let's change $ n$ sequentially from 1 to 5, build each system model, and try mrun 20 for each configuration. After mrun 20 completes, DEVS# summarizes the performance indices on the console. Table 4.1 shows the performance indices for each configuration, and Figure 4.10 shows the trend of performance changes as $ n$ varies.

Average Queue Length and Average System Time drop drastically until $ n$ reaches 3. Average Throughput increases up to 0.2 jobs/sec at $ n$ =3 and does not increase further at $ n$ =4 and 5. The likely reason throughput stops growing after $ n$ =3 is that there are not enough client arrivals from outside the system. A similar phenomenon appears in Utilization, which does not decrease from $ n$ =1 to $ n$ =2 but starts to decrease at $ n$ =3.
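This plateau can be checked with back-of-the-envelope arithmetic (a plain-Python sketch, not DEVS# code). Clients arrive at rate $ \lambda$ = 1/5 = 0.2 jobs/sec, which bounds throughput from above; each server's completion cycle is 10 s of service plus the 2 s Buffer delay, so $ n$ servers can complete at most $ n$/12 jobs/sec:

```python
mean_interarrival = 5.0            # Generator: exponential with mean 5 s
arrival_rate = 1.0 / mean_interarrival
print(arrival_rate)                # 0.2 jobs/sec -- the throughput ceiling

# Each server's cycle = 10 s service + 2 s Buffer sending delay = 12 s,
# so n servers complete at most n/12 jobs/sec; throughput is the smaller
# of that capacity and the arrival rate:
for n in range(1, 6):
    print(n, round(min(n / 12.0, arrival_rate), 2))
```

The printed values 0.08, 0.17, 0.20, 0.20, 0.20 reproduce the Throughput row of Table 4.1.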

Another interesting trend is that the utilizations at $ n$ =1 and $ n$ =2 are both about 80%, not 100%, even though Average Queue Length is 589 and 173 and Average System Time is 2,927.33 and 873.86 sec, respectively. The cause appears to be Buffer::tau(SendTo)=2. Server's $ P(C=$ Idle) is about 0.2, which makes sense considering Server::tau(Busy)=10. In other words, except during the client transmission time from Buffer to Server, the Server is working all the time.
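This reading can be checked arithmetically (a plain-Python sketch, not DEVS# code). A saturated server is busy 10 s out of every 10 + 2 = 12 s cycle; once the servers are no longer saturated, the offered work $ \lambda \cdot 10$ is spread over $ n$ servers:

```python
service, delay, lam = 10.0, 2.0, 0.2  # Server Busy time, Buffer delay, arrival rate

# Saturated case (n = 1, 2): busy fraction of the service/transfer cycle
print(round(service / (service + delay), 2))  # 0.83 -- matches Table 4.1

# Unsaturated case (n = 3, 4, 5): offered work lam*service shared by n servers
for n in (3, 4, 5):
    print(n, round(lam * service / n, 2))
```

The printed values 0.83, 0.67, 0.50, 0.40 reproduce the Utilization row of Table 4.1.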


Table 4.1: Performance Indices for each number $ n$ of Servers

    Performance Indices      n=1       n=2      n=3     n=4     n=5
    Queue Length             589.00    173.79   1.65    0.71    0.58
    System Time (sec.)       2,927.33  873.86   18.30   13.54   12.86
    Throughput (jobs/sec.)   0.08      0.17     0.20    0.20    0.20
    Utilization              0.83      0.83     0.67    0.50    0.40

The simulation run time was $ t_o$ = 10,000 seconds. Utilization is measured by the average utilization of all servers for $ 2 \le n$ ; for example, Utilization at $ n$ =3 means $ \sum_{i=1,2,3}$ Utilization(i)/3.
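As a cross-check of this averaging (a plain-Python sketch), the five per-server Busy fractions reported in the $ n$ =5 screen shot later in this section reproduce the 0.40 Utilization entry of Table 4.1:

```python
# P(C=Busy) for SV0..SV4 at n=5, taken from the simulation output
busy = [0.615, 0.529, 0.416, 0.287, 0.165]
avg_util = sum(busy) / len(busy)
print(round(avg_util, 2))  # 0.4 -- the n=5 Utilization entry in Table 4.1
```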

Figure 4.10: Performance Indices

The following screen shot illustrates the average value and its 95% confidence interval for each statistical item listed, where the number of servers is 5. We can see uneven utilizations in this screen shot: for example, $ P(C=$ Busy)=0.61 for server SV0, while $ P(C=$ Busy)=0.17 for server SV4. This imbalance is caused by the search order in the function Buffer::Matched(), which always checks the availability of servers starting from index 0. We may need to modify the search order if we want to utilize the servers more evenly.
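A minimal sketch of one possible fix (plain Python, not the actual DEVS# Buffer::Matched() implementation; the class name RoundRobinMatcher is hypothetical): remember the index chosen last time and resume the scan just past it, so that successive jobs rotate across the servers instead of piling onto SV0:

```python
class RoundRobinMatcher:
    """Hypothetical fairer server search: instead of always scanning
    from index 0, resume scanning after the previously chosen server."""
    def __init__(self, num_servers):
        self.n = num_servers
        self.last = -1  # index of the server chosen previously

    def matched(self, idle):
        """Return the index of an idle server, or None if all are busy.
        `idle` is a list of booleans, one per server."""
        for offset in range(1, self.n + 1):
            i = (self.last + offset) % self.n
            if idle[i]:
                self.last = i
                return i
        return None

m = RoundRobinMatcher(5)
# With all servers idle, successive picks rotate through 0..4:
print([m.matched([True] * 5) for _ in range(5)])  # [0, 1, 2, 3, 4]
```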

Note that in order to obtain a confidence interval for the mean $ \mu$ , a large number of simulation runs must be performed [Zei76,LK91]. It would help the analyst to know how many simulations were run to produce these results; here, mrun 20 performed 20 runs.
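For reference, a sketch of the standard interval computation over independent replications (plain Python; the run values below are hypothetical, and the normal quantile 1.96 is used where the Student-t quantile $ t_{0.975,19} \approx 2.093$ would be slightly wider for 20 runs):

```python
import math

def confidence_interval_95(samples):
    """95% CI for the mean of independent replications:
    mean +/- 1.96 * s / sqrt(m), with s the sample standard deviation."""
    m = len(samples)
    mean = sum(samples) / m
    var = sum((x - mean) ** 2 for x in samples) / (m - 1)  # sample variance
    half = 1.96 * math.sqrt(var / m)
    return mean - half, mean + half

# 20 hypothetical per-run utilization estimates:
runs = [0.40, 0.41, 0.39, 0.40, 0.42, 0.38, 0.40, 0.41, 0.39, 0.40,
        0.40, 0.41, 0.39, 0.40, 0.42, 0.38, 0.40, 0.41, 0.39, 0.40]
lo, hi = confidence_interval_95(runs)
print(round(lo, 3), round(hi, 3))  # narrow interval around 0.40
```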

    ...
    ============= Total Performance Indices =========
    CSsystem.CS.BF
    Average Q length: : 0.567, 95% CI: [0.562, 0.573]

    CSsystem.CS.SV0
    Idle: 0.385, 95% CI: [0.382, 0.388]
    Busy: 0.615, 95% CI: [0.612, 0.618]

    CSsystem.CS.SV1
    Idle: 0.471, 95% CI: [0.468, 0.474]
    Busy: 0.529, 95% CI: [0.526, 0.532]

    CSsystem.CS.SV2
    Idle: 0.584, 95% CI: [0.581, 0.587]
    Busy: 0.416, 95% CI: [0.413, 0.419]

    CSsystem.CS.SV3
    Idle: 0.713, 95% CI: [0.707, 0.718]
    Busy: 0.287, 95% CI: [0.282, 0.293]

    CSsystem.CS.SV4
    Idle: 0.835, 95% CI: [0.830, 0.839]
    Busy: 0.165, 95% CI: [0.161, 0.170]

    CSsystem.EF.Trans
    Average System Time of Job1 (Sec): 12.817, 95% CI: [12.791, 12.843]
    # Job1 went through the system during 10000.00 Secs: 2011.500, 95% CI: [2000.585, 2022.415]
    Average Throughput of Job1 per Sec: 0.201, 95% CI: [0.200, 0.202]

    ========== Simulation Run Completed! ==========


MHHwang 2007-05-08