WI-FI

Wi-Fi (pronounced /ˈwaɪfaɪ/) is a trademark of the Wi-Fi Alliance that may be used with certified products that belong to a class of wireless local area network (WLAN) devices based on the IEEE 802.11 standards. Because of the close relationship with its underlying standard, the term Wi-Fi is often used as a synonym for IEEE 802.11 technology.

The Wi-Fi Alliance is a global, non-profit association of companies that promotes WLAN technology and certifies products if they conform to certain standards of interoperability. Not every IEEE 802.11-compliant device is submitted for certification to the Wi-Fi Alliance, sometimes because of the costs associated with the certification process; the absence of the Wi-Fi logo therefore does not imply that a device is incompatible with Wi-Fi devices.
Today, an IEEE 802.11 device is installed in many personal computers, video game consoles, smartphones, printers, and other peripherals, and virtually all laptop or palm-sized computers.
Internet access
A roof mounted Wi-Fi antenna
A Wi-Fi enabled device such as a personal computer, video game console, mobile phone, MP3 player or personal digital assistant can connect to the Internet when within range of a wireless network connected to the Internet. The coverage of one or more interconnected access points — called a hotspot — can comprise an area as small as a few rooms or as large as many square miles covered by a group of access points with overlapping coverage. Wi-Fi technology has been used in wireless mesh networks, for example, in London.[3]
In addition to private use in homes and offices, Wi-Fi can provide public access at Wi-Fi hotspots provided either free of charge or to subscribers to various commercial services. Organizations and businesses such as airports, hotels and restaurants often provide free hotspots to attract or assist clients. Enthusiasts or authorities who wish to provide services or even to promote business in selected areas sometimes provide free Wi-Fi access. As of 2008 there are more than 300 metropolitan-wide Wi-Fi (Muni-Fi) projects in progress.[4] There were 879 Wi-Fi based Wireless Internet service providers in the Czech Republic as of May 2008.[5][6]
Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, often set up in homes and other premises, provide Internet access and internetworking to all devices connected (wirelessly or by cable) to them. One can also connect Wi-Fi devices in ad hoc mode for client-to-client connections without a router. Wi-Fi also enables network connectivity in places that traditionally lacked it, for example bathrooms, kitchens and garden sheds.
Airport Wi-Fi
In September 2003, Pittsburgh International Airport became the first airport to offer free Wi-Fi throughout its terminal.[7] Such service is now commonplace.
City-wide Wi-Fi
Further information: Municipal wireless network


A municipal wireless antenna in Minneapolis
In the early 2000s, many cities around the world announced plans for city-wide Wi-Fi networks. This proved to be much more difficult than their promoters initially envisioned, with the result that most of these projects were either canceled or placed on indefinite hold. A few were successful; for example, in 2005, Sunnyvale, California became the first city in the United States to offer city-wide free Wi-Fi.[8] A few of the municipal Wi-Fi firms have since entered the field of smart grid networks.[9]
Campus-wide Wi-Fi
Carnegie Mellon University built the first wireless Internet network in the world at their Pittsburgh campus in 1994, long before the Wi-Fi standard was adopted.[10]
Direct computer-to-computer communications
Wi-Fi also allows communications directly from one computer to another without the involvement of an access point. This is called the ad-hoc mode of Wi-Fi transmission. This wireless ad-hoc network mode has proven popular with multiplayer handheld game consoles, such as the Nintendo DS, digital cameras, and other consumer electronics devices. A similar method is a new specification called Wi-Fi Direct which is promoted by the Wi-Fi Alliance for file transfers and media sharing through a new discovery and security methodology.[11]
Future directions
As of 2010 Wi-Fi technology had spread widely within business and industrial sites. In business environments, just like other environments, increasing the number of Wi-Fi access-points provides redundancy, support for fast roaming and increased overall network-capacity by using more channels or by defining smaller cells. Wi-Fi enables wireless voice-applications (VoWLAN or WVOIP). Over the years, Wi-Fi implementations have moved toward "thin" access-points, with more of the network intelligence housed in a centralized network appliance, relegating individual access-points to the role of mere "dumb" radios. Outdoor applications may utilize true mesh topologies. As of 2007 Wi-Fi installations can provide a secure computer networking gateway, firewall, DHCP server, intrusion detection system, and other functions.
History
Wi-Fi uses both single-carrier direct-sequence spread spectrum radio technology (part of the larger family of spread spectrum systems) and multi-carrier orthogonal frequency-division multiplexing (OFDM) radio technology. The deregulation of certain radio-frequencies for unlicensed spread spectrum deployment enabled the development of Wi-Fi products, its onetime competitor HomeRF, Bluetooth, and many other products such as some types of cordless telephones.
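The "orthogonal" in OFDM has a precise meaning that is worth one standard textbook formula (not from this article): an OFDM symbol of duration T is a sum of subcarriers spaced Δf = 1/T apart,

x(t) = \sum_{k=0}^{N-1} X_k \, e^{i 2\pi k \Delta f t}, \qquad 0 \le t < T, \qquad \Delta f = 1/T,

and these subcarriers are orthogonal over the symbol duration because

\frac{1}{T} \int_0^T e^{i 2\pi (k-m) \Delta f t} \, dt = \delta_{km},

so their spectra may overlap without the carriers interfering with one another, which is what lets OFDM pack many carriers densely into one channel.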
Unlicensed spread spectrum was first made available in the US by the FCC in rules adopted on May 9, 1985,[12] and these FCC regulations were later copied, with some changes, in many other countries, enabling the use of this technology in all major countries. The FCC action was proposed by Michael Marcus of the FCC staff in 1980, and the subsequent regulatory action took five more years. It was part of a broader proposal to allow civil use of spread spectrum technology, and was opposed at the time by mainstream equipment manufacturers and many radio system operators.


Half-size ISA 2.4 GHz WaveLAN card by AT&T
Wi-Fi technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released several bands of the radio spectrum for unlicensed use.[14] The precursor to the common Wi-Fi system was invented in 1991 by NCR Corporation/AT&T (later Lucent Technologies & Agere Systems) in Nieuwegein, the Netherlands. It was initially intended for cashier systems; the first wireless products were brought to market under the name WaveLAN with speeds of 1 Mbit/s to 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years and has been named the "father of Wi-Fi," was involved in designing standards such as IEEE 802.11b and 802.11a.
Key portions of the IEEE 802.11 technology underlying Wi-Fi (in its a, g, and n varieties) were found to infringe U.S. Patent 5,487,069, filed in 1993[15] by CSIRO, an Australian research body. The patent has been the subject of protracted and ongoing legal battles between CSIRO and major IT corporations. In 2009, CSIRO settled with 14 companies, including Hewlett-Packard, Intel, Dell, Toshiba, ASUS, Microsoft and Nintendo, under confidential terms. The revenue arising from these settlements to October 2009 is approximately AU$200 million.[16][17][18][19][20][21]
Europe leads overall in uptake of wireless-phone technology, but the US leads in Wi-Fi systems, partly because the US also leads in laptop usage. As of July 2005, there were at least 68,643 Wi-Fi locations worldwide, the majority in the US, followed by the UK and Germany. The US and Western Europe account for about 80% of worldwide Wi-Fi users. Plans are underway in areas of the US to provide public Wi-Fi coverage as a free public service. Even with these large numbers and continuing expansion, the extent of actual Wi-Fi usage is lower than expected: Jupiter Research found that only 15% of people had used Wi-Fi, and only 6% had done so in a public place.[22]
Wi-Fi certification

Main article: Wi-Fi Alliance
Wi-Fi technology is based on IEEE 802.11 standards. The IEEE develops and publishes these standards, but does not test equipment for compliance with them. The non-profit Wi-Fi Alliance was formed in 1999 to fill this void — to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local area network technology. Today the Wi-Fi Alliance consists of more than 300 companies from around the world.[23][24] Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, are permitted to mark those products with the Wi-Fi logo.
Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular phone technology in converged devices, and features relating to security set-up, multimedia, and power saving.


The Wi-Fi name
The term Wi-Fi suggests Wireless Fidelity, by analogy with the long-established audio-equipment certification term High Fidelity or Hi-Fi. Wireless Fidelity has often been used, even by the Wi-Fi Alliance itself in its press releases[26][27] and documents;[28][29] the term may also be found in a white paper on Wi-Fi from ITAA.[30] However, according to Phil Belanger's statement,[31] the term Wi-Fi was never supposed to mean anything at all.[32][33]
The term Wi-Fi, first used commercially in August 1999,[34] was coined by the brand-consulting firm Interbrand Corporation, which the Alliance had hired to come up with a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'."[35][32][33] Belanger also said that Interbrand invented Wi-Fi as a play on words with Hi-Fi, and that it also created the yin-yang-style Wi-Fi logo. The term Wireless Fidelity was applied later as an explanation of what Wi-Fi means.
The Wi-Fi Alliance initially used the advertising slogan "The Standard for Wireless Fidelity" for Wi-Fi,[32] but later removed the phrase from its marketing. Even so, some Alliance documents dated 2003 and 2004 still contain the term Wireless Fidelity.[28][29] No official statement about dropping the term was ever issued.
The yin-yang logo indicates that a product has been certified for interoperability.[28]
Advantages and challenges


A keychain-size Wi-Fi detector.
Operational advantages
Wi-Fi allows local area networks (LANs) to be deployed without wires for client devices, typically reducing the costs of network deployment and expansion. Spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs.
Wireless network adapters are now built into most laptops. The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in even more devices. Wi-Fi has become widespread in corporate infrastructures.
Competing brands of access points and client network interfaces are interoperable at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Wi-Fi is a global set of standards: unlike mobile phones, any standard Wi-Fi device will work anywhere in the world.

Wi-Fi is widely available in more than 220,000 public hotspots and tens of millions of homes and corporate and university campuses worldwide.[36] The current version of Wi-Fi Protected Access encryption (WPA2) is considered secure, provided a strong passphrase is used. New protocols for Quality of Service (WMM) make Wi-Fi more suitable for latency-sensitive applications (such as voice and video), and power saving mechanisms (WMM Power Save) improve battery operation.
Limitations
Spectrum assignments and operational limitations are not consistent worldwide. Most of Europe allows two channels beyond those permitted in the U.S. for the 2.4 GHz band (channels 1–13 vs. 1–11), and Japan allows one more on top of that (1–14). As of 2007, Europe was essentially homogeneous in this respect. A confusing aspect is that a Wi-Fi signal actually occupies the width of about five channels in the 2.4 GHz band, leaving only three non-overlapping channels in the U.S.: 1, 6 and 11. In Europe, four channels (1, 5, 9 and 13) can be used, but only if all the equipment in a specific area is guaranteed not to use 802.11b at all, even as fallback or beacon. Equivalent isotropically radiated power (EIRP) in the EU is limited to 20 dBm (100 mW).
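The channel arithmetic can be checked mechanically. Below is a minimal C sketch, assuming the standard 2.4 GHz channel plan (channel n centered at 2407 + 5n MHz for n = 1–13) and a roughly 22 MHz-wide 802.11b signal:

#include <stdio.h>
#include <stdlib.h>

/* Center frequency in MHz for 2.4 GHz channels 1-13; channel 14 (Japan) sits apart at 2484 MHz. */
static int center_mhz(int ch) { return 2407 + 5 * ch; }

/* An 802.11b signal is roughly 22 MHz wide, so two channels interfere
   unless their centers are at least 22 MHz (five channel steps) apart. */
static int overlaps(int a, int b) {
    return abs(center_mhz(a) - center_mhz(b)) < 22;
}

int main(void) {
    printf("1 vs 6:  %s\n", overlaps(1, 6)  ? "overlap" : "clear"); /* clear: 25 MHz apart */
    printf("6 vs 11: %s\n", overlaps(6, 11) ? "overlap" : "clear"); /* clear: 25 MHz apart */
    printf("1 vs 5:  %s\n", overlaps(1, 5)  ? "overlap" : "clear"); /* overlap: only 20 MHz apart */
    return 0;
}

The European 1/5/9/13 plan spaces centers only 20 MHz apart and so fails this 22 MHz test; it works only because 802.11g OFDM transmissions occupy slightly less bandwidth than 802.11b ones, hence the requirement above that no 802.11b equipment be present.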
Reach
See also: Long-range Wi-Fi
Large satellite dish modified for long-range Wi-Fi communications in Venezuela
Wi-Fi networks have limited range. A typical wireless router using 802.11b or 802.11g with a stock antenna might have a range of 32 m (120 ft) indoors and 95 m (300 ft) outdoors. The newer IEEE 802.11n, however, can more than double that range.[citation needed] Range also varies with frequency band: Wi-Fi in the 2.4 GHz frequency block has slightly better range than Wi-Fi in the 5 GHz frequency block. Outdoor range can be improved through the use of directional antennas, with antennas located several kilometres or more from their base. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15[37] in the USA.

Wi-Fi performance decreases roughly quadratically[citation needed] as distance increases at constant radiation levels.
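That quadratic falloff is the free-space inverse-square law. In the idealized Friis model (a standard radio-engineering result, not specific to Wi-Fi),

P_r = P_t \, G_t G_r \left( \frac{\lambda}{4\pi d} \right)^2,

where P_t and P_r are the transmitted and received powers, G_t and G_r the antenna gains, λ the wavelength and d the distance: doubling d cuts P_r to one quarter. Indoors, walls and multipath usually make the real decay steeper than quadratic.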
Due to reach requirements for wireless LAN applications, power consumption is fairly high compared to some other standards. Technologies such as Bluetooth, designed to support wireless PAN applications, provide a much shorter propagation range (under 10 m) and so have correspondingly lower power consumption.

Database
A database is an integrated collection of logically related records or files consolidated into a common pool that provides data for one or more uses. One way of classifying databases involves the type of content, for example: bibliographic, full-text, numeric, image. Other classification methods start from examining database models or database architectures: see below. Software organizes the data in a database according to a database model. As of 2010 the relational model occurs most commonly. Other models such as the hierarchical model and the network model use a more explicit representation of relationships.
Architecture
A number of database architectures exist. Many databases use a combination of strategies.
Databases consist of software-based "containers" that are structured to collect and store information so users can retrieve, add, update or remove such information in an automatic fashion. Database programs are designed so that users can add or delete any information needed. In the common relational model, the structure of a database is tabular, consisting of rows and columns of information.
Online transaction processing (OLTP) systems often use a "row-oriented" or an "object-oriented" data store architecture, whereas data-warehouse and other retrieval-focused applications, like Google's BigTable or bibliographic database (library catalog) systems, may use a column-oriented DBMS architecture.
Document-oriented databases, XML databases, knowledge bases, frame databases and RDF stores (also known as triple stores) may also use a combination of these architectures in their implementation.
Not all databases have or need a database schema ("schema-less databases").
Over many years general-purpose database systems have dominated the database industry. These offer a wide range of functions, applicable to many, if not most circumstances in modern data processing. These have been enhanced with extensible datatypes (pioneered in the PostgreSQL project) to allow development of a very wide range of applications.
There are also other types of databases which cannot be classified as relational databases. Most notable is the object database management system, which stores language objects natively without using a separate data definition language and without translating into a separate storage schema. Unlike relational systems, these object databases store the relationship between complex data types as part of their storage model in a way that does not require runtime calculation of related data using relational algebra execution algorithms.
Database management systems
Main article: Database management system
A database management system (DBMS) consists of software that organizes the storage of data. A DBMS controls the creation, maintenance, and use of the database storage structures of organizations and of their users. It allows organizations to place control of organization-wide database development in the hands of database administrators (DBAs) and other specialists. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way.
Database management systems are usually categorized according to the database model that they support, such as the network, relational or object model. The model tends to determine the query languages that are available to access the database. One commonly used query language for the relational database is SQL, although SQL syntax and function can vary from one DBMS to another. A common query language for the object database is OQL; not every vendor of object databases implements it, but the majority do. A great deal of the internal engineering of a DBMS is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.
A relational database management system (RDBMS) implements features of the relational model. In this context, Date's "Information Principle" states: "the entire information content of the database is represented in one and only one way. Namely as explicit values in column positions (attributes) and rows in relations (tuples). Therefore, there are no explicit pointers between related tables." This contrasts with the object database management system (ODBMS), which does store explicit pointers between related types.
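The contrast can be sketched in C (hypothetical structures, purely illustrative; no real DBMS is this simple). The relational style relates rows only through matching values, while the object style stores a traversable pointer:

#include <stdio.h>

/* Relational style: rows reference each other only by value (a foreign key). */
struct dept_row { int dept_id; const char *name; };
struct emp_row  { int emp_id; const char *name; int dept_id; /* a value, not a pointer */ };

/* Object style: the relationship is a stored pointer, followed directly. */
struct dept_obj { const char *name; };
struct emp_obj  { const char *name; struct dept_obj *dept; /* explicit pointer */ };

int main(void) {
    struct dept_row d = {10, "Research"};
    struct emp_row  e = {1, "Codd", 10};
    /* Relational lookup: a join re-discovers the link by comparing values. */
    if (e.dept_id == d.dept_id)
        printf("%s works in %s\n", e.name, d.name);

    struct dept_obj od = {"Research"};
    struct emp_obj  oe = {"Codd", &od};
    /* Object navigation: no comparison, just pointer traversal. */
    printf("%s works in %s\n", oe.name, oe.dept->name);
    return 0;
}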
Components of DBMS
According to the Wikibooks open-content textbook "Design of Main Memory Database System/Overview of DBMS", most DBMSs as of 2009 implement a relational model. Other, less-used DBMS types, such as the object DBMS, generally operate in areas of application-specific data management where performance and scalability take higher priority than the flexibility of ad hoc query capabilities provided via the relational-algebra execution algorithms of a relational DBMS.
RDBMS components
• Interface drivers - A user or application program initiates either schema modification or content modification. These drivers[which?] are built on top of SQL. They provide methods to prepare statements, execute statements, fetch results, etc. (the prepare/execute/fetch cycle is illustrated in the sketch after this list). Examples include DDL, DCL, DML, ODBC, and JDBC. Some vendors provide language-specific proprietary interfaces. For example, MySQL provides drivers for PHP, Python, etc.
• SQL engine - This component interprets and executes the SQL query. It comprises three major components (compiler, optimizer, and execution engine).
• Transaction engine - Transactions are sequences of operations that read or write database elements, which are grouped together.
• Relational engine - Relational objects such as Table, Index, and Referential integrity constraints are implemented in this component.
• Storage engine - This component stores and retrieves data records. It also provides a mechanism to store metadata and control information such as undo logs, redo logs, lock tables, etc.
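As a concrete, minimal illustration of the prepare/execute/fetch cycle mentioned under interface drivers, here is a sketch against SQLite's C interface. SQLite is not mentioned above and stands in here only as an example; other drivers such as ODBC or JDBC expose the same general shape of API.

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;

    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    /* Schema modification (DDL) followed by content modification (DML). */
    sqlite3_exec(db, "CREATE TABLE t(x INTEGER); INSERT INTO t VALUES (42);",
                 NULL, NULL, NULL);

    /* Prepare the statement, execute it step by step, fetch each result row. */
    sqlite3_prepare_v2(db, "SELECT x FROM t", -1, &stmt, NULL);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("x = %d\n", sqlite3_column_int(stmt, 0));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

Compiled with the SQLite library linked in (e.g. -lsqlite3), this prints x = 42; the SQL engine, transaction engine and storage engine described above all sit behind these few calls.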
ODBMS components
• Language drivers - A user or application program initiates either schema modification or content modification via the chosen programming language. The drivers then provide the mechanism to manage object lifecycle coupling of the application memory space with the underlying persistent storage. Examples include C++, Java, .NET, and Ruby.
• Query engine - This component interprets and executes language-specific query commands in the form of OQL, LINQ, JDOQL, JPAQL, others. The query engine returns language specific collections of objects which satisfy a query predicate expressed as logical operators e.g. >, <, >=, <=, AND, OR, NOT, GroupBY, etc.
• Transaction engine - Transactions are sequences of operations that read or write database elements, which are grouped together. The transaction engine is concerned with such things as data isolation and consistency in the driver cache and data volumes by coordinating with the storage engine.
• Storage engine - This component stores and retrieves objects in an arbitrarily complex model. It also provides a mechanism to manage and store metadata and control information such as undo logs, redo logs, lock graphs, etc.
Primary tasks of DBMS packages
• Database Development: used to define and organize the content, relationships, and structure of the data needed to build a database.
• Database Interrogation: used to access the data in a database for information retrieval and report generation. End users can selectively retrieve and display information and produce printed reports and documents.
• Database Maintenance: used to add, delete, update, correct, and protect the data in a database.
• Application Development: used to develop prototypes of data entry screens, queries, forms, reports, tables, and labels for a prototyped application, or to develop the program code with a fourth-generation language (4GL) or an application generator.
Types
Operational database
These databases store detailed data needed to support the operations of an entire organization. They are also called subject-area databases (SADB), transaction databases, and production databases. For example:
• customer databases
• personal databases
• inventory databases
• accounting databases
Analytical database
These databases store data and information extracted from selected operational and external databases. They consist of summarized data and information most needed by an organization's management and other[which?] end-users. Some people refer to analytical databases as multidimensional databases, management databases, or information databases.
Data warehouse
A data warehouse stores data from current and previous years — data extracted from the various operational databases of an organization. It becomes the central source of data that has been screened, edited, standardized and integrated so that it can be used by managers and other end-user professionals throughout an organization. Data warehouses are characterized by being slow to insert into but fast to retrieve from. Recent developments in data warehousing have led to the use of a shared-nothing architecture to facilitate extreme scaling.
Distributed database
These are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.
End-user database
These databases consist of a variety of data files developed by end-users at their workstations. Examples of these are collections of documents in spreadsheets, word processing and even downloaded files.
External database
These databases provide online access to external, privately owned data, available for a fee to end-users and organizations from commercial services; a wealth of information is also available, with or without charge, from many sources on the Internet.
Hypermedia databases on the web
These are a set of interconnected multimedia pages at a website. They consist of a home page and other hyperlinked pages[citation needed] of multimedia or mixed media such as text, graphics, photographic images, video clips, audio, etc.
Navigational database
In navigational databases, queries find objects primarily by following references from other objects. Traditionally navigational interfaces are procedural, though one could characterize some modern systems like XPath as being simultaneously navigational and declarative.
In-memory databases
In-memory databases primarily rely on main memory for computer data storage. This contrasts with database management systems which employ a disk-based storage mechanism. Main memory databases are faster than disk-optimized databases since[citation needed] the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory provides faster and more predictable performance than disk. In applications where response time is critical, such as telecommunications network equipment that operates emergency systems, main memory databases are often used.
Document-oriented databases
Main article: Document-oriented database
Document-oriented databases are computer programs designed for document-oriented applications. These systems may be implemented as a layer above a relational database or an object database. As opposed to relational databases, document-based databases do not store data in tables with uniform sized fields for each record. Instead, they store each record as a document that has certain characteristics. Any number of fields of any length can be added to a document. Fields can also contain multiple pieces of data.
Real-time databases
A real-time database is a processing system designed to handle workloads whose state may change constantly. This differs from traditional databases containing persistent data, mostly unaffected by time. For example, a stock market changes rapidly and dynamically. Real-time processing means that a transaction is processed fast enough for the result to come back and be acted on right away. Real-time databases are useful for accounting, banking, law, medical records, multi-media, process control, reservation systems, and scientific data analysis. As computers increase in power and can store more data, real-time databases become integrated into society and are employed in many applications.

Computer network

A computer network is a group of computers that are interconnected by electronic circuits or wireless transmissions of various designs and technologies for the purpose of exchanging data or communicating information between them or their users. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of types and categories and also presents the basic components of a network.
Introduction
A computer network allows sharing of resources and information among devices connected to the network. The Advanced Research Projects Agency (ARPA) funded the design of the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense. It was the first operational computer network in the world.[1] Development of the network began in 1969, based on designs developed during the 1960s. For a history see ARPANET, the first network.
Purpose
• Facilitating communications. Using a network, people can communicate efficiently and easily via e-mail, instant messaging, chat rooms, telephony, video telephone calls, and videoconferencing.
• Sharing hardware. In a networked environment, each computer on a network can access and use hardware on the network. Suppose several personal computers on a network each require the use of a laser printer. If the personal computers and a laser printer are connected to a network, each user can then access the laser printer on the network, as they need it.
• Sharing files, data, and information. In a network environment, any authorized user can access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.
• Sharing software. Users connected to a network can access application programs on the network.
Network classification
The following list presents categories used for classifying networks.
Connection method
Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn.
Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Wired technologies
• Twisted-pair wire is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires which consist of two insulated copper wires twisted into pairs and are used for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 100 million bits per second.
• Coaxial cable is widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.
• Fiber optic cable consists of one or more filaments of glass fiber wrapped in protective layers. It transmits light which can travel over extended distances without signal loss. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than for twisted-pair wire.
Wireless technologies
• Terrestrial Microwave – Terrestrial microwave links use Earth-based transmitters and receivers whose equipment looks similar to satellite dishes. They use the low-gigahertz range, which limits all communications to line-of-sight paths between relay stations spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.
• Communications Satellites – Satellites communicate via microwave radio, which is not deflected by the Earth's atmosphere. The satellites are stationed in space, typically about 22,000 miles above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
• Cellular and PCS Systems – These use several radio communications technologies. The coverage region is divided into geographic areas, each served by a low-power transmitter or radio relay antenna that relays calls from one area to the next.
• Wireless LANs – Wireless local area networks use high-frequency radio technology similar to digital cellular, as well as low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of an open-standards wireless radio-wave technology is IEEE 802.11b.
• Bluetooth – A short-range wireless technology operating at approximately 1 Mbit/s over a range of 10 to 100 meters. Bluetooth is an open wireless protocol for data exchange over short distances.
• The Wireless Web – The wireless web refers to the use of the World Wide Web through equipment such as cellular phones, pagers, PDAs, and other portable communications devices. Wireless web services offer an anytime/anywhere connection.
Scale
Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage, trust level, and access right often differ between these types of network. For example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization and may include connections to third parties.
Functional relationship (network architecture)
Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client-server and peer-to-peer (workgroup) architecture.
Network topology
Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, or tree (hierarchical) topology network. Network topology is the arrangement of devices in the network in their logical relations to one another, independent of physical placement. Even if networked computers are physically placed in a linear arrangement and are connected to a hub, the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct. Networks may also be classified by the method used to convey the data; these include digital and analog networks.
Types of networks
Common types of computer networks may be identified by their scale.
Personal area network
A personal area network (PAN) is a computer network used for communication among a computer and different information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless connections between devices. The reach of a PAN typically extends to 10 meters.[2] A wired PAN is usually constructed with USB and FireWire, while a wireless PAN uses Bluetooth or infrared.[3]
Local area network
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).[4]



Typical library network, in a branching tree topology and controlled access to resources
All interconnected devices must understand the network layer (layer 3), because they are handling multiple subnets (the different colors). Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (Wide Area Networks), include their higher data transfer rates, smaller geographic range, and no need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s. This is the data transfer rate. IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[5]
Home area network
A home area network (HAN) or home network is a residential local area network. It is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a CATV or Digital Subscriber Line (DSL) provider.
Campus area network
A campus area network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting.
In the case of a university campus-based campus area network, the network is likely to link a variety of campus buildings, including academic departments, the university library and student residence halls. A campus area network is larger than a local area network but smaller than a wide area network (WAN) (in some cases).
The main aim of a campus area network is to facilitate students' access to Internet and university resources. This is a network that connects two or more LANs but is limited to a specific and contiguous geographical area such as a college campus, industrial complex, office building, or military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN. The term is most often used to discuss the implementation of networks for a contiguous area, and should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings.
Metropolitan area network
A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a metropolitan area network.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area, such as a city or country, or even spans intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
Global area network
A global area network (GAN) is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless local area networks (WLANs).[6]
Virtual private network
A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
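Tunneling is, at bottom, encapsulation: the virtual network's frame travels as opaque payload inside a packet of the larger network. Below is a schematic C sketch (hypothetical field layout, not any real VPN protocol such as IPsec or OpenVPN):

#include <stdint.h>
#include <string.h>

/* Outer header: all the public network ever looks at when routing. */
struct outer_header {
    uint32_t src_ip, dst_ip;   /* endpoints of the tunnel, not of the real traffic */
    uint16_t dst_port;
};

/* A tunneled packet: the private frame rides inside as opaque (often encrypted) bytes. */
struct tunnel_packet {
    struct outer_header outer;
    uint8_t  inner_frame[1500];
    size_t   inner_len;
};

/* Encapsulation: a data link layer frame of the virtual network becomes payload. */
static void encapsulate(struct tunnel_packet *pkt, const uint8_t *frame, size_t len) {
    memcpy(pkt->inner_frame, frame, len);
    pkt->inner_len = len;
}

The receiving tunnel endpoint reverses the step: it strips the outer header and injects the inner frame into the private network, which is why the larger network never needs to understand the virtual network's protocols.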
A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.
A VPN allows computer users to appear to connect from an IP address location other than the one that actually connects their computer to the Internet.
Internetwork
An internetwork is the connection of two or more distinct computer networks via a common routing technology; the result is called an internetwork (often shortened to internet). Two or more networks connect using devices that operate at the Network Layer (Layer 3) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.
Internet
The Internet is a global system of interconnected governmental, academic, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.





Programming language

A programming language is an artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine, to express algorithms precisely, or as a mode of human communication.
Many programming languages have some form of written specification of their syntax (form) and semantics (meaning). Some languages are defined by a specification document. For example, the C programming language is specified by an ISO Standard. Other languages, such as Perl, have a dominant implementation that is used as a reference.
The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description.
Definitions
A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[1] Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms.[1][2] Traits often considered important for what constitutes a programming language include:
• Function and target: A computer programming language is a language[3] used to write computer programs, which involve a computer performing some kind of computation[4] or algorithm and possibly control external devices such as printers, disk drives, robots,[5] and so on. For example PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[6] In most practical contexts, a programming language involves a computer; consequently programming languages are usually defined and studied this way.[7] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

• Abstractions: Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle;[8] this principle is sometimes formulated as recommendation to the programmer to make proper use of such abstractions.[9]
• Expressive power: The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL and Charity are examples of languages that are not Turing complete, yet often called programming languages.[10][11]
Markup languages like XML, HTML or troff, which define structured data, are not generally considered programming languages.[12][13][14] Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete XML dialect.[15][16][17] Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.[18][19]
The term computer language is sometimes used interchangeably with programming language.[20] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[21] In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[22] Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[23] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.

Elements
All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:
• "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.
• "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.
The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning):
/* Assumes something like: typedef struct { double real, im; } complex; with <math.h> included. */
complex *p = NULL;
complex abs_p = sqrt (p->real * p->real + p->im * p->im); /* dereferences the null pointer p */
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[25] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[26] In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements, and do not require code execution.[27]
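The string-replacement nature of C macros is easy to demonstrate with the classic pitfall (a standard textbook example, not from the source text):

#include <stdio.h>

#define SQUARE(x) x * x   /* pure textual substitution: no parsing, no evaluation */

int main(void) {
    /* The macro expands to 1 + 2 * 1 + 2, which is 5, not (1 + 2) * (1 + 2) = 9. */
    printf("%d\n", SQUARE(1 + 2));
    return 0;
}

A Lisp macro or a Perl BEGIN block, by contrast, can run arbitrary code at parse time, which is what makes syntax analysis for those languages undecidable in general.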
Static semantics

The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1] For compiled languages, static semantics essentially includes those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[28] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analysis, like data flow analysis, may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.
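For example, the following C fragment parses under the context-free grammar but violates the static-semantic rule named above that case labels be distinct, so a conforming compiler rejects it at compile time:

int classify(int n) {
    switch (n) {
        case 1:  return 10;
        case 1:  return 20;   /* compile-time error: duplicate case value */
        default: return 0;
    }
}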
Type system
Main articles: Type system and Type safety
A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, perform type inference, which relieves the programmer from writing type annotations. The formal design and study of type systems is known as type theory.

Typed versus untyped languages
A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[29] For example, "this text between the quotes" is a string. In most programming languages, dividing a number by a string has no meaning. Most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages, the meaningless operation will be detected when the program is compiled ("static" type checking), and rejected by the compiler, while in others, it will be detected when the program is run ("dynamic" type checking), resulting in a runtime exception.
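In a statically checked language such as C, the meaningless operation described above is rejected before the program ever runs; a minimal sketch:

int main(void) {
    /* Rejected at compile time: invalid operands to binary / (int and char *). */
    int nonsense = 4 / "this text between the quotes";
    return nonsense;
}

A dynamically checked language would instead accept the program and raise the error only when this line actually executes.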
A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type—most commonly character strings which are used for both symbolic and numeric data.
In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[29] High-level languages which are untyped include BCPL and some varieties of Forth.
In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[29] Many production languages provide means to bypass or subvert the type system.
Static versus dynamic typing
In static typing all expressions have their types determined prior to the program being run (typically at compile-time). For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[29]
Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[30]
Dynamic typing, also called latent typing, determines the type-safety of operations at runtime; in other words, types are associated with runtime values rather than textual expressions.[29] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, making debugging more difficult. Ruby, Lisp, JavaScript, and Python are dynamically typed.
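The mechanics behind "types are associated with runtime values" can be sketched even in statically typed C, using a tagged union of the general kind that interpreters for languages like Ruby, Lisp, JavaScript, or Python use internally (a minimal illustration, not any real interpreter's layout):

#include <stdio.h>

/* A dynamically typed value: the type tag travels with the value at runtime. */
enum tag { T_INT, T_STR };
struct value {
    enum tag tag;
    union { int i; const char *s; } as;
};

/* The "type check" happens at execution time, not at compile time. */
static void print_double(struct value v) {
    if (v.tag == T_INT)
        printf("%d\n", v.as.i * 2);
    else
        printf("runtime type error: cannot double a string\n");
}

int main(void) {
    struct value a = {.tag = T_INT, .as = {.i = 21}};
    struct value b = {.tag = T_STR, .as = {.s = "hi"}};
    print_double(a);   /* prints 42 */
    print_double(b);   /* runtime type error */
    return 0;
}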
Weak and strong typing
Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[29] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at runtime.
Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[29] Strongly typed languages are often termed type-safe or safe.
An alternative definition for "weakly typed" refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors.
Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[31][32]
Execution semantics
Further information: Formal semantics of programming languages
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The execution semantics (also known as dynamic semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allow execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.

Computer

A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format.

Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1] Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into small pocket devices, and can be powered by a small battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are however the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a netbook to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

History of computing

Main article: History of computing hardware

The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards, however, the word began to take on its more familiar meaning, describing a machine that carries out computations.[3]

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[4] This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[5] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[6][7] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.[5]

The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[8] Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..."[9] To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine. Of his role in the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, stated: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine".[10]

The inventor of the program-controlled computer was Konrad Zuse, who built the first working computer in 1941 and, in 1955, the first computer based on magnetic storage.[11]

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[12]

Defining characteristics of some early digital computers of the 1940s (in the history of computing hardware)

Name                                                     | First operational | Numeral system | Computing mechanism | Programming                                                                                                                              | Turing complete
---------------------------------------------------------|-------------------|----------------|---------------------|------------------------------------------------------------------------------------------------------------------------------------------|----------------
Zuse Z3 (Germany)                                        | May 1941          | Binary         | Electro-mechanical  | Program-controlled by punched film stock (but no conditional branch)                                                                       | Yes (1998)
Atanasoff–Berry Computer (US)                            | 1942              | Binary         | Electronic          | Not programmable; single purpose                                                                                                           | No
Colossus Mark 1 (UK)                                     | February 1944     | Binary         | Electronic          | Program-controlled by patch cables and switches                                                                                            | No
Harvard Mark I – IBM ASCC (US)                           | May 1944          | Decimal        | Electro-mechanical  | Program-controlled by 24-channel punched paper tape (but no conditional branch)                                                            | No
Colossus Mark 2 (UK)                                     | June 1944         | Binary         | Electronic          | Program-controlled by patch cables and switches                                                                                            | No
ENIAC (US)                                               | July 1946         | Decimal        | Electronic          | Program-controlled by patch cables and switches                                                                                            | Yes
Manchester Small-Scale Experimental Machine (Baby) (UK)  | June 1948         | Binary         | Electronic          | Stored-program in Williams cathode ray tube memory                                                                                         | Yes
Modified ENIAC (US)                                      | September 1948    | Decimal        | Electronic          | Program-controlled by patch cables and switches plus a primitive read-only stored-programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK)                                               | May 1949          | Binary         | Electronic          | Stored-program in mercury delay line memory                                                                                                | Yes
Manchester Mark 1 (UK)                                   | October 1949      | Binary         | Electronic          | Stored-program in Williams cathode ray tube memory and magnetic drum memory                                                                | Yes
CSIRAC (Australia)                                       | November 1949     | Binary         | Electronic          | Stored-program in mercury delay line memory                                                                                                | Yes

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally