GK Test (Computer Education)
Computer Knowledge
A computer is an electronic machine used for storing, organizing, and finding words, numbers, and
pictures, for doing calculations, and for controlling other machines, as in a personal or home computer.
Typical usage: "All our customer orders are handled by computer"; "We've put all our records on computer";
"computer software/hardware."
Computing basics
The first computers were
used primarily for numerical calculations. However, as any information can be
numerically encoded, people soon realized that computers are capable of
general-purpose information processing. Their capacity to handle large amounts
of data has extended the range and accuracy of weather forecasting. Their speed has allowed them to make
decisions about routing telephone connections through a network and to control mechanical systems such as automobiles, nuclear
reactors, and robotic surgical tools. They are also cheap enough to be embedded in
everyday appliances and to make clothes dryers
and rice cookers “smart.” Computers have allowed us to pose and answer
questions that were difficult to pursue in the past. These questions might be
about DNA sequences in genes, patterns of activity in a
consumer market, or all the uses of a word in texts that have been stored in
a database. Increasingly, computers can also learn and adapt as
they operate by using processes such as machine learning.
To take the GK Test (Computer Education), click the link below.
https://forms.gle/cjUeC7GNrLgPiBWi9
Computers
also have limitations, some of which are theoretical. For example, there are
undecidable propositions whose truth cannot be determined within a given set of
rules, such as the logical structure of a computer. Because no universal algorithmic method
can exist to identify such propositions, a computer asked to obtain the truth
of such a proposition will (unless forcibly interrupted) continue
indefinitely—a condition known as the “halting
problem.” (See Turing machine.) Other limitations reflect current technology.
For example, although computers have progressed greatly in terms of processing
data and using artificial intelligence algorithms, they are limited by their incapacity to think in a
more holistic fashion. Computers may imitate humans—quite
effectively, even—but imitation may not replace the human element in social
interaction. Ethical concerns also limit computers, because computers
rely on data, rather than a moral compass
or human conscience, to make decisions.
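A short sketch can make the halting problem mentioned above concrete. It assumes a hypothetical function halts(program) that claims to decide whether a program finishes; the stand-in below returns a fixed answer only so the example runs, since no correct general-purpose version can be written.

def halts(program) -> bool:
    # Stand-in for the impossible decider. Whatever fixed rule it applies,
    # paradox() below contradicts its answer.
    return True

def paradox():
    if halts(paradox):
        while True:      # the decider said "paradox halts", so loop forever
            pass
    return               # the decider said "paradox loops", so halt at once

With this stand-in, calling paradox() loops forever even though halts(paradox) returned True; flipping the stand-in to return False produces the opposite contradiction, which is why no universal halts() can exist.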
Operating systems
Operating
systems manage a computer’s resources—memory, peripheral devices, and even CPU access—and provide a
battery of services to the user’s programs. UNIX, first developed for
minicomputers and now widely used on both PCs and mainframes, is one example;
Linux (a UNIX-like system), Microsoft Corporation’s Windows XP, and Apple Computer’s
OS X are others.
One may
think of an operating system as a set of concentric shells. At the center is
the bare processor, surrounded by layers of operating system routines to manage
input/output (I/O), memory access, multiple processes, and communication among
processes. User programs are located in the outermost layers. Each layer
insulates its inner layer from direct access, while providing services to its
outer layer. This architecture frees
outer layers from having to know all the details of lower-level operations,
while protecting inner layers and their essential services from interference.
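A rough sketch of the shell picture, using invented layer names: the user program in the outer layer asks a file-system layer for named files, and only the file-system layer touches the (simulated) device beneath it.

class Disk:
    # Innermost layer: the bare device, simulated as numbered blocks.
    def __init__(self):
        self._blocks = {}
    def read_block(self, n):
        return self._blocks.get(n, b"")
    def write_block(self, n, data):
        self._blocks[n] = data

class FileSystem:
    # Middle layer: maps file names to blocks and hides block numbers entirely.
    def __init__(self, disk):
        self._disk = disk
        self._index = {}        # file name -> block number
    def write_file(self, name, data):
        block = self._index.setdefault(name, len(self._index))
        self._disk.write_block(block, data)
    def read_file(self, name):
        return self._disk.read_block(self._index[name])

# Outermost layer: a user program that sees only file names, never blocks.
fs = FileSystem(Disk())
fs.write_file("notes.txt", b"hello")
print(fs.read_file("notes.txt"))    # b'hello'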
Early
computers had no operating system. A user loaded a program from paper tape by
employing switches to specify its memory address, to start loading, and to run
the program. When the program finished, the computer halted. The programmer had
to have knowledge of every computer detail, such as how much memory it had and
the characteristics of I/O devices used by the program.
It was
quickly realized that this was an inefficient use of resources, particularly as
the CPU was largely idle while waiting
for relatively slow I/O devices to finish tasks such as reading and writing
data. If instead several programs could be loaded at once and coordinated to
interleave their steps of computation and I/O, more work could be done. The
earliest operating systems were small supervisor programs that did just that:
they coordinated several programs, accepted commands from the operator, and
provided them all with basic I/O operations. These were known as
multiprogrammed systems.
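A toy simulation, with made-up burst lengths, of the gain described above: running two jobs one after the other takes the full sum of their steps, while a supervisor that gives the CPU to one job whenever the other is waiting on I/O finishes sooner.

# Each job alternates CPU bursts and I/O waits; the durations are invented.
job_a = [("cpu", 2), ("io", 4), ("cpu", 2)]
job_b = [("cpu", 3), ("io", 2), ("cpu", 1)]

sequential = sum(t for job in (job_a, job_b) for _, t in job)   # 14

def multiprogrammed(jobs):
    # One tick at a time: every job in an I/O step makes progress, and the CPU
    # is given to at most one job whose current step needs it.
    remaining = [list(job) for job in jobs]
    ticks = 0
    while any(remaining):
        io_jobs  = [r for r in remaining if r and r[0][0] == "io"]
        cpu_jobs = [r for r in remaining if r and r[0][0] == "cpu"]
        for r in io_jobs + cpu_jobs[:1]:
            kind, t = r[0]
            r[0] = (kind, t - 1)
            if t - 1 == 0:
                r.pop(0)
        ticks += 1
    return ticks

print(sequential)                        # 14
print(multiprogrammed([job_a, job_b]))   # 9 with these made-up jobs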
A multiprogrammed system must schedule its
programs according to some priority rule, such as “shortest jobs first.” It
must protect them from mutual interference to prevent an addressing error in a
program from corrupting the data or code of another. It must ensure noninterference
during I/O so that output from several programs does not get commingled or
input misdirected. It might also have to record the CPU time of each job for
billing purposes.
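As a small illustration of one such rule, the sketch below uses invented job lengths to compare running jobs in arrival order against running the shortest job first; short jobs no longer wait behind long ones, so the average wait drops.

# Invented batch of jobs: (name, expected run time).
jobs = [("payroll", 30), ("report", 5), ("backup", 120), ("query", 2)]

def average_wait(schedule):
    # Average time a job spends waiting before it starts running.
    total_wait, elapsed = 0, 0
    for _, run_time in schedule:
        total_wait += elapsed
        elapsed += run_time
    return total_wait / len(schedule)

fifo = jobs                               # arrival order
sjf  = sorted(jobs, key=lambda j: j[1])   # shortest job first

print(average_wait(fifo))   # 55.0 with these numbers
print(average_wait(sjf))    # 11.5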
Modern types of operating systems
Multiuser systems
An extension of
multiprogramming systems was developed in the 1960s, known variously as
multiuser or time-sharing systems. Time-sharing allows
many people to interact with a computer at once, each getting a small portion
of the CPU’s time. If the CPU is fast enough, it will appear to be dedicated to
each user, particularly as a computer can perform many functions while waiting
for each user to finish typing the latest commands.
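A toy round-robin sketch of that illusion, with invented workloads: the operating system hands out short slices of CPU time and returns unfinished users to the end of the queue, so everyone sees steady progress.

from collections import deque

# Units of CPU time each user still needs; the numbers are made up.
queue = deque([("alice", 7), ("bob", 3), ("carol", 5)])
SLICE = 2                     # maximum units per turn

turn = 0
while queue:
    name, needed = queue.popleft()
    used = min(SLICE, needed)
    turn += 1
    print(f"turn {turn}: {name} runs for {used} unit(s)")
    if needed > used:
        queue.append((name, needed - used))   # back to the end of the line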
Multiuser
operating systems employ a technique known as multiprocessing, or multitasking (as
do most single-user systems today), in which even a single program may consist
of many separate computational activities, called processes. The system must
keep track of active and queued processes, of when each process must access
secondary memory to retrieve and store its code and data, and of the allocation
of other resources, such as peripheral devices.
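A bare-bones sketch of that bookkeeping, with invented field names: each process gets a record of its state and the resources it currently holds, collected into a table keyed by process id.

from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    state: str = "ready"                           # "running", "ready", or "blocked"
    open_files: list = field(default_factory=list)
    devices: list = field(default_factory=list)    # e.g. a printer it has claimed

process_table = {
    1: Process(pid=1, state="running", open_files=["log.txt"]),
    2: Process(pid=2, state="blocked", devices=["printer"]),
    3: Process(pid=3),                             # waiting in the ready queue
}

# When process 2's printer transfer finishes, it becomes runnable again.
process_table[2].state = "ready"
print([(p.pid, p.state) for p in process_table.values()])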
Since
main memory was very limited, early operating systems had to be as small as
possible to leave room for other programs. To overcome some of this limitation,
operating systems use virtual memory, one of many computing techniques developed
during the late 1950s under the direction of Tom Kilburn at
the University of Manchester, England. Virtual memory gives each
process a large address space (memory that it may use), often much larger than
the actual main memory. This address space resides in secondary memory (such as
tape or disks), from which portions are copied into main memory as needed,
updated as necessary, and returned when a process is no longer active. Even
with virtual memory, however, some “kernel” of the operating system has to
remain in main memory. Early UNIX kernels occupied tens of kilobytes; today
they occupy more than a megabyte, and PC operating systems are comparable, largely because of
the declining cost of main memory.
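A much simplified sketch of that mechanism, with invented sizes: the process may touch many pages, but only a few frames of (simulated) main memory exist, so pages are loaded from secondary storage on demand and the oldest resident page is written back when a frame is needed.

from collections import OrderedDict

PAGE_SIZE = 4096
FRAMES    = 3                 # pretend main memory holds only three pages

backing_store = {}            # secondary memory: page number -> contents
resident = OrderedDict()      # pages currently in main memory, oldest first

def touch(page):
    # Ensure `page` is in main memory, loading and evicting as needed.
    if page in resident:
        resident.move_to_end(page)            # used again recently
        return
    if len(resident) == FRAMES:
        old, data = resident.popitem(last=False)
        backing_store[old] = data             # write the evicted page back
        print(f"evict page {old}")
    resident[page] = backing_store.get(page, bytearray(PAGE_SIZE))
    print(f"load page {page}")

# A process whose address space is far larger than the three frames.
for page in [0, 1, 2, 3, 0, 4, 1]:
    touch(page)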
Operating systems have to maintain virtual
memory tables to keep track of where each process’s address space resides, and
modern CPUs provide special registers to make this more efficient. Indeed, much
of an operating system consists of tables: tables of processes, of files and
their locations (directories), of resources used by each process, and so on.
There are also tables of user accounts and passwords that help control access to the user’s files and protect them
against accidental or malicious interference.
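A small sketch of one such table, with invented numbers: a virtual address is split into a page number and an offset, the page number is looked up in the running process's page table (the table a special CPU register would point at), and the offset is kept unchanged.

PAGE_SIZE = 4096

# Invented page table for one process: virtual page number -> physical frame.
page_table = {0: 5, 1: 9, 2: 2}

def translate(virtual_address):
    page   = virtual_address // PAGE_SIZE
    offset = virtual_address %  PAGE_SIZE
    frame  = page_table[page]        # an unmapped page would raise KeyError here
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))        # page 1, offset 0x234 -> frame 9 -> 0x9234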