ASSIGNMENT OF MCA2010 - OPERATING SYSTEM
Question No 1. Differentiate between
Distributed Systems and Real-time Systems
DISTRIBUTED OPERATING SYSTEM:
A distributed OS provides the essential services and
functionality required of an OS, adding attributes and particular
configurations to allow it to support additional requirements such as increased
scale and availability. To a user, a distributed OS works in a manner similar
to a single-node, monolithic operating system: although it consists of
multiple nodes, it appears to users and applications as a single node.
Separating minimal system-level functionality from
additional user-level modular services provides a “separation of mechanism and
policy.” Mechanism and policy can be simply interpreted as "how something
is done" versus "why something is done," respectively. This
separation increases flexibility and scalability.
REAL TIME OPERATING SYSTEM:
A real-time operating system (RTOS) is an operating
system (OS) intended to serve real-time applications that process data as it
comes in, typically without buffering delays. Processing time requirements
(including any OS delay) are measured in tenths of seconds or shorter.
A key characteristic of an RTOS is the level of its
consistency concerning the amount of time it takes to accept and complete an
application's task; this variability is known as jitter.[1] A hard real-time
operating system has less jitter than a soft real-time operating system. The
chief design goal is not high throughput, but rather a guarantee of a soft or
hard performance category. An RTOS that can usually or generally meet a
deadline is a soft real-time OS, but one that can meet deadlines
deterministically is a hard real-time OS.[2]
An RTOS has an advanced algorithm for scheduling. Scheduler
flexibility enables a wider, computer-system orchestration of process
priorities, but a real-time OS is more frequently dedicated to a narrow set of
applications. Key factors in a real-time OS are minimal interrupt latency and
minimal thread switching latency; a real-time OS is valued more for how quickly
or how predictably it can respond than for the amount of work it can perform in
a given period of time.
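As a rough illustration of jitter on a general-purpose (non-real-time) OS, the following sketch requests a fixed sleep period and records how far actual wake-up times deviate from it; the function name and parameters are illustrative assumptions, not an RTOS facility:

```python
import time

def measure_sleep_jitter(period_s=0.01, iterations=20):
    """Request a fixed sleep period and record how far the actual elapsed
    time deviates from it. On a general-purpose OS this deviation (jitter)
    is unbounded; a hard RTOS would guarantee an upper bound on it."""
    jitters = []
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(period_s)
        elapsed = time.monotonic() - start
        jitters.append(elapsed - period_s)  # overshoot past the requested period
    return jitters

jitters = measure_sleep_jitter()
print(f"max jitter: {max(jitters) * 1000:.3f} ms")
```

Running this repeatedly on a desktop OS typically shows jitter varying from run to run, which is exactly the inconsistency a hard real-time OS is designed to bound.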
Question No 2. Explain the different process states
Process State
A process executes sequentially, one
instruction at a time. A program is a passive entity, for example a file on the
disk. A process, on the other hand, is an active entity: in addition to the
program code, it includes the value of the program counter, the contents of the
CPU registers, the global variables in the data section, and the contents of
the stack used for subroutine calls. In reality, the CPU switches back and
forth among processes.
1. New
The process is being created.
2. Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run.
3. Running
Process instructions are being executed (i.e., the process is currently being executed by the CPU).
4. Waiting
The process is waiting for some event to occur (such as the completion of an I/O operation).
5. Terminated
The process has finished execution.
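The five states above form a small state machine. The following sketch models it in Python; the transition table is an illustrative assumption based on the descriptions above, not a real scheduler:

```python
# Allowed transitions in the five-state process model described above.
ALLOWED = {
    "new":        {"ready"},                            # admitted by the OS
    "ready":      {"running"},                          # dispatched to a CPU
    "running":    {"ready", "waiting", "terminated"},   # preempt / I/O wait / exit
    "waiting":    {"ready"},                            # awaited event occurs
    "terminated": set(),                                # no further transitions
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A typical lifetime: created, scheduled, blocks on I/O, resumes, exits.
p = Process(pid=1)
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.transition(s)
print(p.state)  # terminated
```

Note that a process never moves directly from ready to waiting: it can only block while running, which the transition table enforces.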
Process Control Block (PCB)
Each process is
represented in the operating system by a process control block (PCB), also
called a task control block. The PCB is the data structure the operating
system uses to group together all the information it needs about a particular
process. A PCB contains many pieces of information associated with a specific
process, described below.
1. Pointer
The pointer points to another process control block and is used for maintaining the scheduling list.
2. Process State
The process state may be new, ready, running, waiting, and so on.
3. Program Counter
The program counter indicates the address of the next instruction to be executed for this process.
4. CPU Registers
The CPU registers include general-purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture.
5. Memory Management Information
This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system. It is useful for deallocating memory when the process terminates.
6. Accounting Information
This information includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
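The PCB fields listed above can be sketched as a record type. The field names and types below are illustrative assumptions; a real kernel stores this information in C structures:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative process control block with the fields described above."""
    pid: int
    state: str = "new"                # new, ready, running, waiting, ...
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register values
    base_register: int = 0            # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0        # accounting information
    next: Optional["PCB"] = None      # pointer for the scheduling list

# The pointer field lets the OS chain PCBs into a ready queue.
a, b = PCB(pid=1, state="ready"), PCB(pid=2, state="ready")
a.next = b
print(a.next.pid)  # 2
```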
Question No 3. Define Deadlock.
Explain necessary conditions for deadlock
DEADLOCK:
A deadlock is a situation in which two or more
competing actions are each waiting for the other to finish, and thus neither
ever does.
In a transactional database, a deadlock happens when
two processes, each within its own transaction, update the same two rows of
information but in opposite order. For example, process A updates row 1 then
row 2 in the exact timeframe that process B updates row 2 then row 1. Process
A cannot finish updating row 2 until process B finishes, but process B cannot
finish updating row 1 until process A finishes. No matter how much time passes,
this situation will never resolve itself; because of this, database management
systems will typically kill the transaction of the process that has done the
least amount of work.
In an operating system, a deadlock is a situation which
occurs when a process or thread enters a waiting state because a resource
requested is being held by another waiting process, which in turn is waiting
for another resource. If a process is unable to change its state indefinitely
because the resources requested by it are being used by another waiting
process, then the system is said to be in a deadlock.[1]
Deadlock is a common problem in multiprocessing
systems, parallel computing and distributed systems, where software and
hardware locks are used to handle shared resources and implement process
synchronization.[2]
In telecommunication systems, deadlocks occur mainly
due to lost or corrupt signals instead of resource contention.
Deadlock Conditions:
Mutual exclusion
The resources involved must be unshareable; otherwise,
the processes would not be prevented from using the resource when necessary.
Hold and wait (partial allocation)
The processes must hold the resources they have already
been allocated while waiting for other (requested) resources. If the process
had to release its resources when a new resource or resources were requested,
deadlock could not occur because the process would not prevent others from
using resources that it controlled.
No pre-emption
The processes must not have resources taken away while
that resource is being used. Otherwise, deadlock could not occur since the
operating system could simply take enough resources from running processes to
enable any process to finish.
Circular wait (resource waiting)
There must exist a circular chain of processes, with each process
holding resources which are currently being requested by the next process in
the chain. The cycle theorem (which states that
"a cycle in the resource graph is necessary for deadlock to occur")
indicates that deadlock cannot arise without such a cycle.
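The circular-wait condition can be checked mechanically by looking for a cycle in a wait-for graph (an edge from each process to the process it is waiting on). The sketch below uses depth-first search; the graph shapes and process names are illustrative:

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph (dict: process -> list of
    processes it waits on) contains a cycle, i.e. a circular wait."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True           # back edge: circular wait found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# A waits on B and B waits on A, as in the database example above.
print(has_deadlock({"A": ["B"], "B": ["A"]}))  # True
print(has_deadlock({"A": ["B"], "B": []}))     # False
```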
Question No 4. Differentiate between Sequential access and Direct access
methods.
SEQUENTIAL ACCESS:
Sequential access means that a group of elements (such
as data in a memory array or a disk file or on magnetic tape data storage) is
accessed in a predetermined, ordered sequence. Sequential access is sometimes
the only way of accessing the data, for example if it is on a tape. It may also
be the access method of choice, for example if all that is wanted is to process
a sequence of data elements in order.
However, there is no consistent definition of
sequential access or sequentiality.[2][3][4][5][6][7][8][9] In fact, different
sequentiality definitions can lead to different sequentiality quantification
results. In the spatial dimension, request size, strided distance, backward
accesses, and re-accesses can all affect sequentiality. For temporal
sequentiality, characteristics such as multi-stream access and the
inter-arrival time threshold have an impact on the definition of
sequentiality.[10]
In data structures, a data structure is said to have
sequential access if one can only visit the values it contains in one
particular order. The canonical example is the linked list. Indexing into a
list that has sequential access requires O(k) time, where k is the index. As a
result, many algorithms such as quicksort and binary search degenerate into bad
algorithms that are even less efficient than their naïve alternatives; these
algorithms are impractical without random access. On the other hand, some
algorithms, typically those that do not need an index, require only sequential
access, such as mergesort, and face no penalty.
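The O(k) cost of indexing into a sequential-access structure can be seen by contrasting a linked list with a directly indexable array. The sketch below is illustrative; the class and function names are assumptions:

```python
class Node:
    """Singly linked list node: the canonical sequential-access structure."""
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def linked_list_index(head, k):
    """Reach index k by walking k links from the head: O(k) time."""
    node = head
    for _ in range(k):
        node = node.next
    return node.value

# Build the linked list 0 -> 1 -> 2 -> 3.
head = None
for v in reversed(range(4)):
    head = Node(v, head)

direct = [0, 1, 2, 3]  # a Python list supports O(1) direct indexing

print(linked_list_index(head, 3))  # 3, after traversing 3 links
print(direct[3])                   # 3, reached in a single step
```

Both calls return the same value, but the linked-list version must visit every preceding element, which is why binary search degenerates on such structures.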
DIRECT ACCESS:
Direct access (also called relative or random access) treats a file as a
numbered sequence of fixed-length logical records or blocks, and allows those
records to be read or written in any order. A program can jump directly to
record n and read or write it immediately, without first passing through the
n - 1 records before it; there are no restrictions on the order of reading and
writing. Direct access is based on the disk model of a file, since disks allow
random access to any file block. It is essential for applications that need
immediate access to large amounts of information, such as databases: when a
query arrives, the system computes which block contains the answer and reads
that block directly. Whereas sequential access must advance through every
preceding element to reach a given one, direct access can reach any element in
roughly constant time, regardless of its position.
Question No 5. Differentiate between Daisy chain bus arbitration and Priority encoded bus arbitration.
Daisy Chain Bus Arbitration
Centralized Bus Arbitration
Centralized bus arbitration requires hardware that will
grant the bus to one of the requesting devices. This hardware can be part of
the CPU or it can be a separate device on the motherboard.
Centralized One Level Bus Arbiter
This scheme is represented in Fig. 3-36 (a) from the
text. The lines needed are
Bus Request Line
This is a wired-OR line: the controller only knows that
a request has been made by a device, but doesn't know which device made the
request.
Bus Grant Line
This line is propagated through all of the devices.
When the controller sees that a bus request has been
made, it asserts the Bus Grant line to the first device.
If a device made a request, it will take control of the
bus when it receives the asserted Bus Grant Line and will leave the Bus Grant
line negated for the next device in the chain.
If the device didn't request the bus, then it will
assert the Bus Grant line for the next device in the chain.
If more than one device makes a request at the same
time, then the device that is closer to the arbiter will get the bus. This is
known as daisy-chaining. There is a discrete phase in the bus request cycle
where requests can be made. Many devices can request the bus during this phase.
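The grant-propagation rule above can be sketched as a small simulation. The function and its representation of the chain are illustrative assumptions, not real bus hardware:

```python
def daisy_chain_grant(requests):
    """Simulate one daisy-chained grant cycle.

    requests[i] is True if device i asserted the wired-OR Bus Request line.
    The arbiter asserts Bus Grant to device 0; each device either consumes
    the grant (if it requested the bus) or passes it to the next device.
    Returns the winning device index, or None if nobody requested."""
    for device, requested in enumerate(requests):
        if requested:
            return device   # device consumes the grant; chain stops here
        # otherwise the device re-asserts Bus Grant for the next in line
    return None             # grant propagated off the end: no requester

# Devices 1 and 3 request simultaneously; device 1 is closer to the
# arbiter, so it wins, which is exactly the daisy-chain tie-break.
print(daisy_chain_grant([False, True, False, True]))  # 1
```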
Some details that are left to the imagination:
How does the device indicate that it is done? This
depends on the type of bus: synchronous or asynchronous. Synchronous means that
the device has a fixed time to handle the request, if it can't handle it in
that time, then it will issue a signal forcing the CPU to wait. The CPU would send a Halt signal to the
Arbiter so that the start of another cycle would be delayed. Otherwise, the
Arbiter will release the bus at a predefined time. Asynchronous means that the
bus is more complicated and has more lines to indicate that a request has been
completed. The bus would indicate to the CPU that the bus request was done.
This information would then be sent to the Arbiter via a Release Bus message to
indicate that the bus could be released. It is also different if DMA is being
used. DMA can be used for transferring blocks of data. The CPU initiates the
first read (or write) of a byte, then passes the remainder of the accesses to
the DMA controller. After each byte is processed by the device, it releases the
bus, so that a device with a higher priority could interrupt the current transfer.
When the entire block has been transferred, an interrupt is sent to the CPU.
When is the Bus Grant line negated? When the request is
completed. See above for details of how a device indicates it is done. When the
Bus Grant line is negated, then a new bus cycle can begin.
How often does a device make a request before it is
granted? The device asserts its Request line until it gets the bus. If a device
with a high priority is busy, then a lower priority device could wait a long
time before having its request granted.
Centralized Two Level Bus Arbiter
This scheme is represented in Fig 3-36 (b) from the
text. The lines needed are:
Bus Request: one for each level.
Bus Grant: one for each level.
This helps to alleviate the problem that the closest
device to the controller always gets the bus. If requests are made on more
than one request line during the same clock cycle, then only the highest
priority is granted the bus. The advantage to this is that once the bus has
been granted to a lower priority device, a higher priority device can't steal
the bus. However, if a higher priority device makes a request during each
cycle, then the lower priority device will never get the bus.
Decentralized Bus Arbitration
In decentralized arbitration there isn't an arbiter, so
the devices have to decide who goes next. This makes the devices more
complicated, but saves the expense of having an arbiter.
VAX SBI Bus
On the VAX by DEC, there are 16 separate request lines.
All devices monitor all the request lines, and know their own priority level.
If a device wants the bus, it must first check to see if a higher priority
device wants the bus: if not, then it gets the bus, otherwise it must wait.
Things left to the imagination:
When does a device negate its request line? After it
completes its request.
How does a device know that the bus is in use? By
seeing if any device with a higher priority has requested it. When all higher
priority devices have finished their requests, they will negate their request
lines.
Multibus
This scheme is represented in Fig. 3-37 of the text.
The necessary lines are:
Arbitration line
This acts as a line to indicate that the bus is being
granted. If the IN line is asserted, then a device knows that the bus has been
granted to it. If the IN line is negated, then the bus has not been granted.
When the bus is available and no device wants the bus, all the IN lines for
each device will be asserted. The device attempts to grab the bus by negating
OUT. However, the device must wait a period of time before asserting BUSY, to
make sure that a device with a higher priority hasn't negated its own OUT line.
Busy
This line indicates whether another device has made a
request. If Busy is negated, then the device negates OUT and waits an
undetermined amount of time to see if its IN will be negated.
Timbus
The above indicates that there are discrete phases in
the bus grant cycle: 1) request bus and negate OUT, 2) wait to see if IN gets
negated, if not then assert BUSY. Here is another solution that I made up that
doesn't require a waiting period. See if it makes sense to you.
Arbitration Line
This acts as a line to indicate that the bus is being
granted. If the IN line is asserted, then a device knows that the bus has been
granted to it. If the IN line is negated, then the bus has not been granted.
When the bus is available and no device wants the bus, only the IN line to the
first device will be asserted, all others will be negated.
Busy
PRIORITY ENCODED BUS ARBITRATION:
In priority-encoded arbitration, each device has a
request line connected to a centralized arbiter that determines which device
will be granted access to the bus. The order may be fixed by the order of
connection (priority encoded), or it may be determined by some algorithm
preloaded into the arbiter. The figure shows this type of system. Note that
each device has a separate line to the bus arbiter. (The bus_grant signals have
been omitted for clarity.)
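A fixed-order priority encoder can be sketched as follows. Unlike the daisy chain, every device has its own request line into the arbiter; the choice of "highest line number wins" here is an illustrative assumption, since the actual order is set by wiring or by an algorithm loaded into the arbiter:

```python
def priority_encode(request_lines):
    """Central arbiter with one dedicated request line per device.

    request_lines[i] is True if device i is requesting the bus.
    Returns the asserted line with the highest priority (here, the
    highest line number), or None if no line is asserted."""
    winner = None
    for line, asserted in enumerate(request_lines):
        if asserted:
            winner = line   # later (higher-numbered) lines override earlier ones
    return winner

# Devices 0 and 2 request at once; device 2 has higher priority and wins.
print(priority_encode([True, False, True, False]))  # 2
```

The key contrast with daisy chaining: priority comes from the encoder's fixed ordering of separate lines, not from a device's physical position in a grant chain.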
Question No 6. Differentiate between encryption and decryption. What are the two basic methods for encryption?
Encryption is the process in which readable text is
converted into unreadable text; the readable text is referred to as
"plain text", while the unreadable text is referred to as "cipher text".
Decryption is the process in which unreadable text is
converted back into readable text; it is the reverse of the
encryption process.
Encryption is used to make data unreadable to a normal user, while
decryption is used to recover readable data that was previously encrypted.
BASIC METHODS FOR ENCRYPTION:
1. Hashing
Hashing creates a unique, fixed-length signature for a
message or data set. Each “hash” is unique to a specific message, so minor
changes to that message would be easy to track. Once data has been
hashed, it cannot be reversed or deciphered. Hashing, then, though not
technically an encryption method as such, is still useful for proving data
hasn’t been tampered with.
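These properties of hashing can be demonstrated with SHA-256 from Python's standard library; the messages below are illustrative:

```python
import hashlib

msg = b"transfer 100 to alice"
tampered = b"transfer 900 to alice"  # a one-character edit

digest = hashlib.sha256(msg).hexdigest()

# The digest has a fixed length regardless of the message size,
# and even a minor change to the message produces a different digest.
print(len(digest))                                     # 64 hex characters
print(digest == hashlib.sha256(tampered).hexdigest())  # False: edit detected
```

There is no function that recovers `msg` from `digest`, which is why hashing proves integrity rather than providing reversible encryption.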
2. Symmetric methods
Symmetric encryption is also known as private-key
cryptography, and is called so because the key used to encrypt and decrypt the
message must remain secure, because anyone with access to it can decrypt the
data. Using this method, a sender encrypts the data with one key, sends the
data (the ciphertext) and then the receiver uses the key to decrypt the data.
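The shared-key idea can be illustrated with a toy XOR cipher. This is NOT a secure cipher and is only a sketch of the symmetric pattern (same key encrypts and decrypts); real systems use vetted algorithms such as AES:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key recovers the original data.
    Insecure; for illustration of the shared-key idea only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"                       # hypothetical shared key
plaintext = b"meet at noon"

ciphertext = xor_cipher(plaintext, key)   # sender encrypts with the key
recovered = xor_cipher(ciphertext, key)   # receiver decrypts with the SAME key

print(recovered == plaintext)  # True
```

The security of the scheme rests entirely on keeping the key secret, since anyone holding it can decrypt, which is exactly why symmetric encryption is also called private-key cryptography.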