
Notes of Simulation and Modelling [CT 753]

Simulation of computer systems

 

Simulation tools

Levels of Abstraction in a Computer System

- Computer systems exhibit complex time-scale behaviour, ranging from the time to flip a transistor's state to the time of a human interaction.
- A computer system is designed hierarchically.
- The highest level of abstraction is the system level. At this level, one views computational activity in terms of tasks circulating among servers, queueing for service when a server is busy.
- Below it is the processor level, at which one views the components that make up a processor.
- Below that is the CPU level, at which one views the activity of the functional units that together make up a central processing unit.
- The lowest level is the gate level, at which one views the logic circuitry responsible for all the computations carried out by the computer system.
- Simulation is used at each level, and the results from one level are used by another level.

[Figure: levels of abstraction in a computer system]


Simulation Tools

- Simulation tools are used to build and evaluate simulation models at the different abstraction levels of a computer system.
- An important characteristic of a tool is how it supports model building.
- The tools commonly used for simulation are:
1. CPU network simulation (Queueing network, Petri net simulators)
2. Processor simulation (VHDL)
3. Memory simulation (VHDL)
4. ALU simulation (VHDL)
5. Logic network simulation (VHDL)


Process Oriented Approach

- It implies that the tool must support separately schedulable threads of control.
- It allows a continuous description of each entity's behaviour, with suspensions at the points where the entity must wait.
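
As a minimal sketch of this style (an illustration, not any particular tool), the code below uses Python generators as separately schedulable threads of control; every `yield` is a suspension point, and a tiny scheduler resumes whichever process wakes up next. The job names and service times are made up.

```python
# Minimal process-oriented sketch: each entity is a generator ("process")
# that yields the amount of simulated time it wants to wait; a small
# scheduler resumes the process whose wake-up time comes next.
import heapq

def job(name, service_times):
    """A process: a separately schedulable thread of control."""
    for t in service_times:
        print(f"{name}: requesting {t} units of service")
        yield t                       # suspension point: wait t units of simulated time
    print(f"{name}: finished")

def run(processes):
    events = [(0.0, i, p) for i, p in enumerate(processes)]   # (wake-up time, tiebreak, process)
    heapq.heapify(events)
    clock = 0.0
    while events:
        clock, i, p = heapq.heappop(events)
        try:
            delay = next(p)                                    # resume until the next suspension
            heapq.heappush(events, (clock + delay, i, p))
        except StopIteration:
            pass                                               # process has finished
    return clock

print("end of simulation:", run([job("A", [2.0, 1.5]), job("B", [1.0, 3.0])]))
```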


Event Oriented Approach

- It implies that the tool must support describing the model as a set of events and event-handling routines that are scheduled in simulated time.
- It does not allow a continuous description with suspensions; each event routine runs to completion.
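
For contrast, a minimal event-oriented sketch of the same idea: the model is a future-event list plus event routines that run to completion, and waiting is expressed by scheduling a later event rather than by suspending. The arrival/departure handlers and the constant service time are assumptions.

```python
# Minimal event-oriented sketch: a future-event list ordered by simulated
# time drives event routines; each routine runs to completion and may
# schedule further events.
import heapq

future_events = []        # the future-event list: (time, sequence no., handler, args)
_seq = 0

def schedule(time, handler, *args):
    global _seq
    heapq.heappush(future_events, (time, _seq, handler, args))
    _seq += 1

def arrival(clock, job_id):
    print(f"t={clock}: job {job_id} arrives")
    schedule(clock + 2.0, departure, job_id)      # service time is an assumed constant

def departure(clock, job_id):
    print(f"t={clock}: job {job_id} departs")

schedule(0.0, arrival, 1)
schedule(1.0, arrival, 2)
while future_events:                               # main event loop
    clock, _, handler, args = heapq.heappop(future_events)
    handler(clock, *args)
```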


High-Level Computer-System Simulation

Problem Definition:

Consider a company that provides a website for searching and linking to sites offering certain facilities. At the back end, data servers handle specific queries and update databases; they receive requests for service from application servers. At the front end, web servers manage the interaction of the applications with the WWW. The whole system is connected to its users through a router. Suppose we need to study the site's ability to handle load at peak periods, i.e., the desired output is the empirical distribution of the access response time.

To design this high-level simulation model, we need to focus on the impact of timing at each level, the factors that affect timing, and the effect of timing on contention for resources.

[Figure: structure of the web site (router, web servers, application servers, data servers, disks)]


Simulation Model:

- All entries into the system pass through a dedicated router. The router examines each request and forwards it to some web server. It takes some time to decide whether the request is a new request or part of an ongoing session; one switching time is assumed for a request in an existing session and a different time for a new request. It selects a web server and enqueues the request for service at that server.
- A web server keeps one queue for new requests, one for suspended requests that are waiting for a response from an application server, and one for requests that are ready to process a response from an application server. It is assumed that the web server has enough memory to handle all requests, and it has a queueing policy. For each new request, the associated application server is identified; a request for service is formatted and forwarded to that application server, and the request joins the suspended queue.
- The application server organizes the requests for service. A new request for service joins its new-request queue. An application request is modeled as a sequence of sets of requests (organized in bursts) to data servers; it is assumed that all data requests in a burst must be satisfied before computation of the next burst begins. For each application, a list of ready-to-execute threads and a list of suspended threads are maintained.
- A data server creates a new thread to respond to a data request and places it in a queue of ready threads. When the thread receives service, it requests data from a disk and is then placed in a suspended queue.
- When the disk completes its operation for the data request, the thread in the data server moves back to the ready list and reports to the application server associated with the request.
- The thread suspended at the application server resumes and finishes, then reports its completion to the web server.
- The web-server thread that initiated the request then communicates the results back to the Internet.
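
A condensed sketch of this pipeline is given below using the SimPy discrete-event library. It is only an illustration of the structure: the resource capacities, service times, burst size, and Poisson arrivals are invented placeholders, and the disk is folded into the data-server delay.

```python
# Condensed sketch of the router -> web server -> application server ->
# data server pipeline, collecting access response times.
import random
import simpy

def request(env, router, web, app, data, response_times):
    start = env.now
    with router.request() as r:                 # router decides where to forward
        yield r
        yield env.timeout(0.001)
    with web.request() as r:                    # web server formats an application request
        yield r
        yield env.timeout(0.002)
    with app.request() as r:                    # application server: one burst of data requests
        yield r
        for _ in range(3):
            with data.request() as d:           # each data request also stands in for its disk I/O
                yield d
                yield env.timeout(random.expovariate(1 / 0.005))
    with web.request() as r:                    # web server sends the result back to the Internet
        yield r
        yield env.timeout(0.002)
    response_times.append(env.now - start)

def source(env, router, web, app, data, response_times):
    for _ in range(1000):                       # Poisson arrivals of user requests
        yield env.timeout(random.expovariate(50))
        env.process(request(env, router, web, app, data, response_times))

env = simpy.Environment()
router = simpy.Resource(env, capacity=1)
web = simpy.Resource(env, capacity=4)
app = simpy.Resource(env, capacity=4)
data = simpy.Resource(env, capacity=2)
response_times = []
env.process(source(env, router, web, app, data, response_times))
env.run()
```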


Response Time Analysis

- The query-response-time distribution is estimated by measuring the interval between the time at which a request first hits the router and the time at which the web-server thread communicates the result.
- The system can also be analyzed by measuring the behaviour at each server of each type.
- To assess system capacity at peak loads, we simulate to identify bottlenecks, then look for ways to reduce load at the bottleneck devices by changing simulation settings such as the scheduling policy, queue discipline, and so on.
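
As an illustration, the empirical distribution can be summarized directly from the recorded response times (the sample values below are placeholders; in a run of the sketch above they would be the collected response_times list):

```python
# Summarize the empirical distribution of access response times.
def empirical_summary(response_times, quantiles=(0.5, 0.9, 0.99)):
    xs = sorted(response_times)
    n = len(xs)
    summary = {"mean": sum(xs) / n}
    for q in quantiles:                       # empirical quantiles by rank
        summary[f"p{round(q * 100)}"] = xs[min(n - 1, int(q * n))]
    return summary

print(empirical_summary([0.021, 0.034, 0.018, 0.095, 0.027, 0.041]))
```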


CPU simulation

- In CPU simulation, we focus on discovering execution time.
- For CPU simulation, the input is a stream of instructions, and the simulation must model what the logical design does in response to that instruction stream.


Problem Definition of ILP (Instruction Level Parallelism) CPU:

The stages in an ILP CPU are as follows:
1. Instruction fetch - The instruction is fetched from memory.
2. Instruction decode - The memory word holding the instruction is interpreted to discover operations to be performed and registers involved.
3. Instruction Issue - An instruction is issued if no constraints hold it back from being executed.
4. Instruction Execute - The instruction operation is performed.
5. Instruction Complete - The results of instruction are stored in the destination register.
6. Instruction Graduate - Executed instructions are graduated in the order that they appear in the instruction stream.

- An ILP design allows multiple instructions to be present in some stages at the same time. This may cause instructions to execute out of order; the instruction graduate stage then reorders the executed instructions.


Simulation Model of ILP (Instruction Level Parallelism) CPU:

- Instruction fetch interacts with the simulated memory system, if one is present: it looks in an instruction cache for the next referenced instruction, stalling if a miss is suffered. This stage enters the instruction into the CPU’s list of active instructions.
- The instruction decode stage places the instruction in the list. A logical register that appears as the target of an operation is assigned a physical register, and registers used as operands are assigned the physical registers that define their values. Branch instructions are identified and their outcomes predicted. Resources for the instruction’s execution are committed.
- The instruction issue stage issues a decoded instruction for execution when the values in its input registers are available and a functional unit needed to perform the instruction is free. This can be implemented by marking registers and functional units as busy or pending; when that state changes, instructions waiting for the register or functional unit are reconsidered for issue.
- The instruction execute stage computes the result specified by the instruction, i.e., the actual operation intended by the instruction is performed.
- The instruction complete stage deposits the result into a register or memory location, as specified by the instruction.
- The instruction graduate stage retires completed instructions in the same order as they appear in the instruction stream. This is simulated by tracking the sequence number of the next instruction to be graduated.
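
The fragment below is a deliberately stripped-down sketch of the issue, complete, and graduate bookkeeping described above. It models only operand readiness, an assumed per-instruction latency, and in-order graduation; fetch, decode, functional-unit contention, and write hazards are ignored. The instruction list and latencies mirror the example that follows.

```python
# Stripped-down sketch of issue / complete / graduate bookkeeping.
class Instr:
    def __init__(self, op, dst, srcs, latency):
        self.op, self.dst, self.srcs, self.latency = op, dst, srcs, latency
        self.done_cycle = None            # cycle at which the result is deposited

def simulate(instrs):
    ready = {r: True for i in instrs for r in [i.dst, *i.srcs]}   # register availability
    waiting = list(instrs)
    cycle, graduated = 0, 0
    while graduated < len(instrs):
        cycle += 1
        # complete: instructions whose latency has elapsed free their destination register
        for ins in instrs:
            if ins.done_cycle == cycle:
                ready[ins.dst] = True
        # issue (possibly out of program order): any waiting instruction whose operands are ready
        for ins in list(waiting):
            if all(ready[r] for r in ins.srcs):
                ready[ins.dst] = False    # destination is pending until the instruction completes
                ins.done_cycle = cycle + ins.latency
                waiting.remove(ins)
        # graduate: retire completed instructions strictly in program order
        while graduated < len(instrs) and instrs[graduated].done_cycle is not None \
                and instrs[graduated].done_cycle <= cycle:
            print(f"cycle {cycle}: {instrs[graduated].op} graduates")
            graduated += 1

simulate([
    Instr("Load $2, $6", "$2", ["$6"], 4),
    Instr("Mult $5, 2",  "$5", [],     2),
    Instr("Add $4, $2",  "$4", ["$2"], 1),
    Instr("Add $5, $2",  "$5", ["$2"], 1),
])
```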


Example

Consider a hypothetical computer with following instructions:
Load $2, $6;
Mult $5, 2;
Add $4, $2;
Add $5, $2;

Assume that loading a register takes 4 cycles, an addition takes 1 cycle, and a multiplication takes 2 cycles to complete.

 

| Instruction / Cycle | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I1 | Fetch | Decode | Issue | Execute | Stall | Stall | Stall | Stall | Complete | Graduate |  |  |  |  |
| I2 |  | Fetch | Decode | Issue | Execute | Stall | Complete | Stall | Stall | Stall | Graduate |  |  |  |
| I3 |  |  | Fetch | Decode | Issue | Execute | Complete | Stall | Stall | Stall | Stall | Graduate |  |  |
| I4 |  |  |  | Fetch | Decode | Stall | Stall | Stall | Stall | Issue | Execute | Complete | Graduate |  |

 


Memory Simulation

Memory is arranged hierarchically with L1 cache, L2 cache, main memory and disks.


Example: Cache Simulation

- The input is the cache parameters and a memory access trace.
- The output of the simulation is the cache hit rate.


Simulation Model

- Maintain a cache directory and the LRU status of the lines within each set.
- When an access is made, update the LRU status.
- If the access is a hit, record it as such.
- If it is a miss, record the miss and update the contents of the directory (the least recently used line in the set is replaced).
- Cache directory is implemented as an array, with array entries corresponding to directory entries.
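
A minimal sketch of this model follows: a set-associative cache directory with LRU status per set, driven by an address trace. The line size, number of sets, associativity, and the sample trace are illustrative assumptions.

```python
# Minimal set-associative cache simulation with LRU replacement,
# driven by an address trace; reports the cache hit rate.
from collections import OrderedDict

def simulate_cache(trace, line_size=64, num_sets=128, assoc=4):
    # one OrderedDict per set: keys are tags, insertion order encodes LRU (oldest first)
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        line = addr // line_size
        index = line % num_sets          # which set the line maps to
        tag = line // num_sets
        s = sets[index]
        if tag in s:                     # hit: refresh LRU status
            s.move_to_end(tag)
            hits += 1
        else:                            # miss: insert, evicting the LRU line if the set is full
            if len(s) >= assoc:
                s.popitem(last=False)
            s[tag] = True
    return hits / len(trace)

# Example usage with a tiny synthetic trace (placeholder addresses):
trace = [0x1000, 0x1004, 0x2000, 0x1008, 0x3000, 0x1000]
print("hit rate:", simulate_cache(trace))
```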

