Wednesday 8 July 2015

SMU assignment of MCA 3rd sem: ASSIGNMENT OF ADVANCED COMPUTER NETWORKS


Question No 1 Differentiate between Physical addresses and Logical addresses.
Answer  
Physical addresses-

This address is also known as the hardware address, link address or MAC (Medium Access Control) address. The manufacturer of the network interface card (NIC) assigns this address and stores it in the card's hardware. Usually this address encodes the manufacturer's registered identification number and may be referred to as the burned-in address. The data-link layer incorporates this address into the frame, so it is the lowest-level address. These addresses have authority only within the local network (LAN or WAN). The size and format of the address vary depending on the network. For example, Ethernet, which is a very popular LAN technology, uses a 6-byte (48-bit) physical address. Likewise, LocalTalk (Apple) uses a 1-byte dynamic address that changes each time the station comes up.
An Ethernet physical address is written as 12 hexadecimal digits, with every byte (2 hexadecimal digits) separated by a colon, as shown below.
07:12:02:01:2C:4B
A 6-byte (12-hexadecimal-digit) physical address
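For illustration only (this sketch is not part of the study material), the colon-separated Ethernet address notation can be converted to and from its 6 raw bytes, for example in Python:

# Minimal sketch: convert a colon-separated Ethernet (MAC) address
# to its 6 raw bytes and back.
def mac_to_bytes(mac: str) -> bytes:
    # "07:12:02:01:2C:4B" -> b'\x07\x12\x02\x01\x2c\x4b'
    return bytes(int(part, 16) for part in mac.split(":"))

def bytes_to_mac(raw: bytes) -> str:
    return ":".join(f"{b:02X}" for b in raw)

raw = mac_to_bytes("07:12:02:01:2C:4B")
print(len(raw))            # 6 bytes = 48 bits
print(bytes_to_mac(raw))   # 07:12:02:01:2C:4B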

Logical addresses

If we want to communicate universally, we need an address that is independent of the underlying physical network. Physical addresses are not enough to support universal communication, so a universal addressing system is required in which each host can be identified uniquely. This universal address, provided by the Internet Service Provider (ISP), is known as the logical address or IP (Internet Protocol) address.
IPv6 is already in use, but the commonly used logical address is still the 32-bit IPv4 address. An example of a logical address is 198.168.10.254. However, this 32-bit address space is not sufficient for the huge number of Internet users, and the available addresses are being depleted. So, very soon we will move to a 128-bit logical address, which provides an enormous number of logical addresses to support universal communication.
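As a quick illustration of the size difference between the two address families, a short Python sketch using the standard ipaddress module is shown below; the sample addresses are purely illustrative:

import ipaddress

v4 = ipaddress.ip_address("198.168.10.254")   # 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")      # 128-bit IPv6 address (documentation prefix)

print(v4.version, v4.max_prefixlen)   # 4 32
print(v6.version, v6.max_prefixlen)   # 6 128
print(int(v4))                        # the address as a 32-bit integer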

Question No 2 Describe DWDM. Explain the components of a basic DWDM system.
ANSWER
Dense Wavelength Division Multiplexing (DWDM) -
 It is conceptually similar to frequency division multiplexing. The only difference is that it is purposely designed for fibre-optic cable to support high data rates. In the literature, the term dense WDM (DWDM) is often used. This term does not imply a different technology from that used for WDM; in fact, the two terms are used interchangeably. WDM and DWDM are based on the same concept of using multiple wavelengths of light on a single fiber, but differ in the spacing of the wavelengths, the number of channels, and the ability to amplify the multiplexed signals in the optical space. Strictly speaking, DWDM refers to the wavelength spacing proposed in the ITU-T G.692 standard. The term DWDM implies the use of more channels, more closely spaced, than ordinary WDM. In general, a channel spacing of 200 GHz or less could be considered dense.
DWDM originally referred to optical signals multiplexed within the 1550 nm band so as to leverage the capabilities of erbium-doped fiber amplifiers (EDFAs), which are effective for wavelengths between approximately 1525–1565 nm or 1570–1610 nm. EDFAs were originally developed to replace SONET/SDH optical-electrical-optical (OEO) regenerators, which they have made practically obsolete. EDFAs can amplify any optical signal in their operating range, regardless of the modulated bit rate.
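To make the idea of "dense" channel spacing concrete, the sketch below generates a few channels on a 100 GHz grid anchored at 193.1 THz (the anchor and spacing follow the ITU-T grid convention; the number of channels shown is arbitrary) and converts each to a wavelength so it can be checked against the EDFA C-band:

# Minimal sketch: a few DWDM channels on a 100 GHz grid anchored at 193.1 THz,
# converted to wavelength to check they fall in the EDFA C-band (~1525-1565 nm).
C = 299_792_458.0            # speed of light, m/s
ANCHOR_HZ = 193.1e12         # grid anchor frequency
SPACING_HZ = 100e9           # 100 GHz spacing ("dense" is 200 GHz or less)

for n in range(-3, 4):       # 7 illustrative channels around the anchor
    f = ANCHOR_HZ + n * SPACING_HZ
    wavelength_nm = C / f * 1e9
    in_c_band = 1525.0 <= wavelength_nm <= 1565.0
    print(f"n={n:+d}  f={f/1e12:.3f} THz  lambda={wavelength_nm:.3f} nm  C-band={in_c_band}")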



The following are some important components of a basic DWDM system:

A DWDM terminal multiplexer: It actually contains one wavelength converting transponder for each wavelength signal it will carry. The wavelength converting transponders receive the input optical signal (i.e., from a client-layer SONET/SDH or other signal), convert that signal into the electrical domain, and retransmit the signal using a 1550 nm band laser.
An intermediate line amplifier: It compensates for the loss in optical power while the signal travels along the fiber.
An intermediate optical terminal or optical add-drop multiplexer: This is a remote amplification site that amplifies the multi-wavelength signal that may have traversed up to 140 km or more before reaching the remote site.
A DWDM terminal demultiplexer: The terminal demultiplexer breaks the multi-wavelength signal back into individual signals and outputs them on separate fibers for client-layer systems (such as SONET/SDH) to detect.
Optical Supervisory Channel (OSC): This is an additional wavelength usually outside the EDFA amplification band. It carries information about the multi-wavelength optical signal as well as remote conditions at the optical terminal or EDFA site.

Question No 3. Describe Peak cell rate (PCR) and Sustained cell rate (SCR).
Peak cell rate (PCR)
                                                      
This is the maximum amount of traffic that can be submitted by a source to an ATM network, expressed in ATM cells per second. Because transmission speeds are expressed in bits per second, it is more convenient to talk about the peak bit rate of a source, i.e., the maximum number of bits per second submitted to an ATM connection, rather than its peak cell rate. The peak bit rate can be translated to the peak cell rate, and vice versa, if we know which ATM adaptation layer is used. The peak cell rate was standardized by both the ITU-T and the ATM Forum.
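As a rough illustration of the bit-rate to cell-rate translation, assuming each 53-byte cell carries a 48-byte payload and ignoring AAL framing overhead (a simplification), a peak bit rate can be converted as follows:

CELL_PAYLOAD_BITS = 48 * 8     # 48-byte payload per 53-byte ATM cell

def peak_cell_rate(peak_bit_rate_bps: float) -> float:
    # Approximate PCR (cells/s) from a peak bit rate (bits/s),
    # ignoring adaptation-layer trailer and padding overhead.
    return peak_bit_rate_bps / CELL_PAYLOAD_BITS

print(peak_cell_rate(1_000_000))   # 1 Mbit/s peak bit rate -> ~2604 cells/s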

Sustained cell rate (SCR)

The average cell rate was not standardized by the ITU-T. Rather, an upper bound on the average cell rate, known as the sustained cell rate (SCR), was standardized by the ATM Forum. It is obtained as follows.
Let us first calculate the average number of cells submitted by the source over successive short periods T. For instance, if the source transmits for a period D equal to 30 minutes and T is equal to one second, then there are 1800 T periods and we obtain 1800 averages (one per period). The largest of all of these averages is called the sustained cell rate (SCR). Note that the SCR of a source cannot be larger than the source's PCR, nor can it be less than the source's average cell rate.
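The averaging procedure just described can be written as a short sketch; the per-period cell counts used here are hypothetical:

def sustained_cell_rate(cells_per_period, period_seconds=1.0):
    # SCR = the largest of the per-period average cell rates.
    return max(count / period_seconds for count in cells_per_period)

# Hypothetical measurement: cells counted in each 1-second period.
counts = [300, 450, 420, 500, 380]
print(sustained_cell_rate(counts))   # 500.0 cells/s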


               
Question No 4. Describe the following:
      a) Open Shortest Path First (OSPF) protocol
      b) Border Gateway Protocol (BGP)
Open Shortest Path First (OSPF) protocol
OSPF is a commonly deployed link-state routing protocol. It views an internetwork as a star configuration, with a backbone at the centre and the other areas attached to it. OSPF incorporates the following concepts to manage the complexity of large internetworks.
Area: This is a collection of contiguous networks and hosts, together with the routers that have interfaces to any of the included networks.
Backbone: This is a contiguous collection of networks not included in any area, their attached routers, and the routers that belong to multiple areas.

A separate copy of the link-state routing algorithm is run by each area. Link-state information is broadcast only to the routers in the same area. Hence, the amount of OSPF traffic is considerably reduced.
Format of an OSPF Packet
An OSPF packet is sent as an IP packet’s payload. The IP packet that contains an OSPF packet has a standard multicast IP address of 224.0.0.5 on point-to-point or broadcast networks. But specific IP destination addresses are used on non-broadcast networks. These addresses are configured into the router beforehand.
The OSPF packet has a 24-byte header. The figure depicts the format of the OSPF packet header.
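As an illustrative sketch of that 24-byte header (the field values below are placeholders, not taken from the study material), the fields can be packed in order: version, type, packet length, router ID, area ID, checksum, authentication type, and an 8-byte authentication field:

import struct, ipaddress

# OSPFv2 header: version, type, packet length, router ID, area ID,
# checksum, AuType, authentication (8 bytes) -> 24 bytes in total.
header = struct.pack(
    "!BBH4s4sHH8s",
    2,                                          # version 2
    1,                                          # type 1 = Hello (placeholder)
    24,                                         # packet length (header only here)
    ipaddress.IPv4Address("10.0.0.1").packed,   # router ID (placeholder)
    ipaddress.IPv4Address("0.0.0.0").packed,    # area ID (backbone)
    0,                                          # checksum (left 0 in this sketch)
    0,                                          # AuType 0 = no authentication
    bytes(8),                                   # authentication field
)
print(len(header))   # 24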

Border Gateway Protocol (BGP)

Routing involves identification of optimal routing paths and the transportation of information groups (packets) through an internetwork. Although the transportation of packets through an internetwork is not complex, identification of optimal paths is very complex. A protocol that we can use for path determination in networks is the Border Gateway Protocol (BGP). The current version of BGP is BGP-4.
BGP implements inter-domain routing in TCP/IP networks. BGP is an exterior gateway protocol (EGP) as it carries out routing between multiple autonomous systems (ASs) or domains, and it exchanges routing and reachability information between peer BGP systems. BGP has replaced the Exterior Gateway Protocol (EGP) as the standard exterior gateway-routing protocol for the global Internet.
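As a toy illustration of path determination (this shows only one simplified step, preferring the shortest AS path, and not BGP-4's full decision process; the prefixes and AS numbers are hypothetical):

# Toy sketch: among several advertisements for the same prefix,
# prefer the route with the shortest AS path (one of several BGP-4 tie-breakers).
advertisements = {
    # next hop         AS path (hypothetical AS numbers)
    "192.0.2.1":    [65010, 65020, 65030],
    "198.51.100.1": [65040, 65030],
}
best_next_hop = min(advertisements, key=lambda hop: len(advertisements[hop]))
print(best_next_hop)   # 198.51.100.1 (AS path length 2)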

Question No 5 Write short notes on:
a) Cryptography
b) Encryption
c) Decryption
d) Cryptanalysis
e) Cryptology
Cryptography
Most early computer applications had no, or at best very little, security. This continued for a number of years, until the importance of data was truly realized. Until then, computer data was considered to be useful, but not something to be protected. When computer applications were developed to handle financial and personal data, the real need for security was felt. People realized that data on computers is an extremely important aspect of modern life. Therefore, various areas in security began to gain prominence. Two typical examples of such security mechanisms were as follows:
Provide a user ID and password to every user, and use that information to authenticate the user.
Encode information stored in the databases in some fashion so that it is not visible to users who do not have the right permissions.
Let us now discuss some important terms used in this security context. Cryptography is the art and science of achieving security by encoding messages to make them non-readable.

Encryption:
Encryption is the process of encoding a message (plain text) into cipher text so that an unauthorized person cannot read it.

Decryption:
Decryption is the reverse process: transforming cipher text (the encrypted message) back into plain text (the original message). Alternatively, the terms encode and decode, or encipher and decipher, are used instead of encrypt and decrypt. That is, we say that we encode, encrypt, or encipher the original message to hide its meaning. Then, we decode, decrypt, or decipher it to reveal the original message. A system for encryption and decryption is called a cryptosystem.
The original form of a message is known as plaintext, and the encrypted form is called cipher text. This relationship is shown in Figure 10.1. For convenience in explanation, we denote a plaintext message P as a sequence of individual characters P = <p1, p2, ..., pn>. Similarly, cipher text is written as C = <c1, c2, ..., cm>. For instance, the plaintext message "sikkim manipal university" can be thought of as the message string <s,i,k,k,i,m, ,m,a,n,i,p,a,l, ,u,n,i,v,e,r,s,i,t,y>. It may be transformed into cipher text <c1, c2, ..., cm>, and the encryption algorithm tells us how the transformation is done.
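A minimal sketch of one very simple cryptosystem, a Caesar-style shift cipher over lowercase letters (spaces are left unchanged; this is for illustration only and is not secure):

import string

ALPHABET = string.ascii_lowercase

def encrypt(plaintext: str, key: int) -> str:
    # Shift each lowercase letter forward by `key`; leave other characters as-is.
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + key) % 26] if ch in ALPHABET else ch
        for ch in plaintext
    )

def decrypt(ciphertext: str, key: int) -> str:
    return encrypt(ciphertext, -key)

c = encrypt("sikkim manipal university", 3)
print(c)                  # vlnnlp pdqlsdo xqlyhuvlwb
print(decrypt(c, 3))      # sikkim manipal university
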
Cryptanalysis:
This is the technique of decoding messages from a non-readable format back to a readable format without knowing how they were initially converted from readable to non-readable format. This is the responsibility of a cryptanalyst. A cryptanalyst can do any or all of six different things:
·         an attempt to break a single message
·         an attempt to recognize patterns in encrypted messages, to be able to break subsequent ones by applying a straightforward decryption algorithm (see the frequency-count sketch after this list)
·         an attempt to infer some meaning without even breaking the encryption, such as noticing an unusual frequency of communication or determining something by whether the communication was short or long
·         an attempt to deduce the key, in order to break subsequent messages easily
·         an attempt to find weaknesses in the implementation or environment of use of encryption
·         an attempt to find general weaknesses in an encryption algorithm, without necessarily having intercepted any messages.
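The pattern-recognition activity mentioned above often starts with something as simple as counting letter frequencies in the cipher text; a minimal sketch, reusing the toy cipher text from the earlier example:

from collections import Counter

def letter_frequencies(ciphertext: str) -> list:
    # Count how often each letter appears; unusual peaks hint at the substitution used.
    letters = [ch for ch in ciphertext.lower() if ch.isalpha()]
    return Counter(letters).most_common()

print(letter_frequencies("vlnnlp pdqlsdo xqlyhuvlwb"))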

Cryptology:
 It is a combination of cryptography and cryptanalysis.
Here, the two main aspects of the encryption and decryption process are the algorithm and the key used for encryption and decryption. To understand this better, let us take the example of a combination lock, which we use in real life. We need to remember the combination of numbers needed to open the lock. The fact that it is a combination lock, and how such locks are opened, are pieces of public knowledge. However, the actual combination required to open a specific lock is kept secret.
In general, the algorithm used for the encryption and decryption processes is known to everybody. It is the key used for encryption and decryption that makes the process of cryptography secure. Broadly, there are two cryptographic mechanisms, depending on the keys used: if the same key is used for encryption and decryption, the mechanism is called symmetric key cryptography; if different keys are used, it is called asymmetric (public) key cryptography.
Question No 6 Differentiate between Single server queue and Multi-server queue
Single server queue –

The important element in a single server queue is the server, which provides service to packets. Packets arrive at the system to be served. If the server is idle at the time a packet arrives, the packet is served immediately. Otherwise, arriving packets enter a waiting queue. When the server completes serving a packet, the served packet departs from the server. If there are packets waiting in the queue, one packet immediately enters the server. Figure 12.1 depicts the single server queue.
The following assumptions are made in single server queues:
The rate of arrival of packets is Poisson.
Dispatching of packets is not prioritized based on service times.
The formulas for standard deviation assume first-in, first-out (FIFO) dispatching.
No packets are discarded from the queue.
Packets arrive at the queuing system at some average rate λ. At any given time, some packets are waiting in the queue. The average number of packets waiting in the queue is w and the mean time that a packet must wait is Tw.
The mean time Tw includes the waiting time of all the packets including those that do not wait at all. The server processes incoming packets with an average service time Ts. Ts is the time interval between the entering of a packet into the server and the departure of that packet from the server.
Utilization ρ is the fraction of time for which the server is busy. The average number of packets r that stay in the system includes the packets being served and the packets waiting in the queue. The average time a packet spends in the system, including both the waiting and the serving time, is Tr.
If the capacity of the single server queuing system is infinite, then the system does not lose any packets, but there may be a delay in serving them. As the arrival rate increases, the utilization and congestion also increase, which increases the queue length and the waiting time. At ρ = 1, the server reaches saturation. The theoretical maximum arrival rate that the system can serve is given by Eq. 12.14 (λmax = 1/Ts).
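Under the assumptions listed at the start of this section, and additionally assuming exponentially distributed service times (the classic M/M/1 model, a common simplification that the text does not state explicitly), the quantities defined above can be evaluated directly:

def mm1_metrics(arrival_rate: float, service_time: float) -> dict:
    # Classic M/M/1 results; valid only while rho < 1 (below saturation).
    rho = arrival_rate * service_time          # utilization
    if rho >= 1:
        raise ValueError("system is saturated: arrival rate exceeds 1/Ts")
    return {
        "rho": rho,
        "r":  rho / (1 - rho),                 # mean packets in the system
        "Tr": service_time / (1 - rho),        # mean time in the system
        "w":  rho ** 2 / (1 - rho),            # mean packets waiting in the queue
        "Tw": rho * service_time / (1 - rho),  # mean waiting time
    }

# Hypothetical load: 400 packets/s arriving, 2 ms mean service time.
print(mm1_metrics(arrival_rate=400.0, service_time=0.002))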

Multi-Server Queue -

In a multi-server queuing system, multiple servers share a common queue. If a packet arrives and at least one server is available, the packet is immediately sent to that server. If no server is available, arriving packets form a queue. When a server becomes free, the next packet in the queue is sent to that server. Figure 12.2 depicts a multi-server queue.
All the parameters shown in Figure 12.1 apply to the multi-server queue. If a multi-server queue includes N identical servers, then ρ is the utilization of each server, and Nρ is considered to be the utilization of the multi-server queuing system; this term is referred to as the traffic intensity, denoted by u. Thus, the maximum input rate for a multi-server queue can be calculated using the corresponding equation, λmax = N/Ts.
Requests arrive at the buffer input at random time instants. If the buffer is empty and a server is free when a new request arrives, the request is immediately passed to the server; the service time is also random.
If, at the time of a request's arrival, the buffer is empty but the server is busy with the previous request, the arriving request must wait in the buffer until the server becomes available. As soon as the server completes the previous request, that request is passed to the output and the server retrieves the next request from the buffer. Requests leaving the server form the output flow. The buffer is infinite, which means that requests are never lost because of buffer overflow. If a newly arrived request finds that the buffer is not empty, it is placed into the queue and waits for servicing. Requests are retrieved from the queue in the order in which they arrived, that is, according to the First In, First Out (FIFO) servicing order.
Queuing theory allows the evaluation of an average queue length and an average waiting time for this model, depending on the characteristics of the input flow and servicing times.
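Assuming the common M/M/N model (Poisson arrivals, N identical servers with exponentially distributed service times, and one shared FIFO queue; the load figures below are hypothetical), the probability of waiting and the average waiting time can be sketched as follows:

from math import factorial

def mmn_metrics(arrival_rate: float, service_time: float, servers: int) -> dict:
    u = arrival_rate * service_time          # traffic intensity (Erlangs)
    rho = u / servers                        # per-server utilization
    if rho >= 1:
        raise ValueError("system is saturated: arrival rate exceeds N/Ts")
    # Erlang C: probability that an arriving packet must wait in the queue.
    tail = (u ** servers / factorial(servers)) / (1 - rho)
    p_wait = tail / (sum(u ** k / factorial(k) for k in range(servers)) + tail)
    return {
        "u": u,
        "rho": rho,
        "P_wait": p_wait,
        "Tw": p_wait * service_time / (servers - u),   # mean waiting time
    }

# Hypothetical load: 400 packets/s, 5 ms mean service time, 3 servers.
print(mmn_metrics(arrival_rate=400.0, service_time=0.005, servers=3))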


------------------------------------------------------------------------------------------------------------------------------------------
