Welcome to Computing & Technology at WebTripleT.com. We've devoted ourselves to locating the best computer information across the nation.
We've brought together many experts and a wealth of information so that we can share it nationwide. Take a moment to learn more about technology from around the country and around the world. With the help of sources around the globe, WebTripleT.com has been able to gather a great amount of computer and technology information from across the nation and share it with everyone.
Everything and anything can be found on WebTripleT.com; the information you want is just clicks away, without ever leaving our site!
History, definitions, and facts about computers, with the help of Wikipedia.
Computing & Technology
Related Resources
Locate the parts and software needed to upgrade your computer, along with where to find the best prices available.
Not sure which computer to buy? With so many computers to choose from, hundreds of reviews are available to help.
Learn the history of the computer age: where it started, who started it, and why it's important for the future.
Enter a contest in a city near you to win a computer. Thousands of contests are always available in cities nationwide!
 
 
Computer
From Wikipedia, the free encyclopedia
A computer is a machine designed for manipulating data according to a list of instructions known as a program.


A typical IBM-compatible PC
An Apple iMac G5
Computers are versatile. In fact, they are universal information-processing machines. According to the Church–Turing thesis, a computer with a certain minimum threshold capability is in principle capable of performing the tasks of any other computer, from those of a personal digital assistant to a supercomputer, as long as time and memory capacity are not considerations. Therefore, the same computer designs may be adapted for tasks ranging from processing company payrolls to controlling unmanned spaceflights. Due to technological advancement, modern electronic computers are exponentially more capable than those of preceding generations (a phenomenon partially described by Moore's Law).
Computers take numerous physical forms. Early electronic computers were the size of a large room, and such enormous computing facilities still exist for specialized scientific computation — supercomputers — and for the transaction processing requirements of large companies, generally called mainframes. Smaller computers for individual use, called personal computers, and their portable equivalent, the laptop computer, are ubiquitous information-processing and communication tools and are perhaps what most non-experts think of as "a computer". However, the most common form of computer in use today is the embedded computer: a small computer used to control another device. Embedded computers control machines from fighter aircraft to digital cameras.

Contents
1 History of computing
2 How computers work: the stored program architecture
3 Digital circuits
4 I/O devices
5 Programs
5.1 Libraries and operating systems
6 Computer applications
6.1 Networking and the Internet
7 Alternative computing models
8 Computing professions and disciplines
9 See also
9.1 Other computers
10 Notes and references

History of computing
Main article: History of computing
ENIAC — a milestone in computing history
Originally, the term "computer" referred to a person who performed numerical calculations, often with the aid of a mechanical calculating device. Examples of these early calculating devices, the first ancestors of the computer, included the abacus and the Antikythera mechanism, an ancient Greek device for calculating the movements of planets, dating from about 87 BC.[1] The end of the Middle Ages saw a reinvigoration of European mathematics and engineering, and Wilhelm Schickard, whose calculating device dates from 1623, was the first of a number of European engineers to construct a mechanical calculator.[2] The abacus itself is sometimes described as an early computer, since it served much the same role as a calculator. In 1801, Joseph Marie Jacquard made an improvement to existing loom designs that used a series of punched paper cards as a program to weave intricate patterns. The resulting Jacquard loom is not considered a true computer, but it was an important step in the development of modern digital computers. Charles Babbage was the first to conceptualize and design a fully programmable computer as early as 1820, but due to a combination of the limits of the technology of the time, limited finance, and an inability to resist tinkering with his design, the device was never actually constructed in his lifetime. A number of technologies that would later prove useful in computing, such as the punch card and the vacuum tube, had appeared by the end of the 19th century, and large-scale automated data processing using punch cards was performed by tabulating machines designed by Hermann Hollerith.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated, special-purpose analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. These became increasingly rare after the development of the programmable digital computer.
A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features of modern computers, such as the use of digital electronics (largely invented by Claude Shannon in 1937)[3] and more flexible programmability. Defining one point along this road as "the first digital electronic computer" is exceedingly difficult. Notable achievements include the Atanasoff-Berry Computer (1937), a special-purpose machine that used valve-driven (vacuum tube) computation, binary numbers, and regenerative memory; the secret British Colossus computer (1944), which had limited programmability but demonstrated that a device using thousands of valves could be made reliable and reprogrammed electronically; the Harvard Mark I, a large-scale electromechanical computer with limited programmability (1944); the decimal-based American ENIAC (1946) — which was the first general purpose electronic computer, but originally had an inflexible architecture that meant reprogramming it essentially required it to be rewired; and Konrad Zuse's Z machines, with the electromechanical Z3 (1941) being the first working machine featuring automatic binary arithmetic and feasible programmability.
The team who developed ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which has become known as the Von Neumann architecture (or "stored program architecture"). This stored program architecture became the basis for virtually all modern computers. A number of projects to develop computers based on the stored program architecture commenced in the mid to late 1940s; the first of these was completed in Britain. The first to be up and running was the Small-Scale Experimental Machine, but the EDSAC was perhaps the first practical version to be developed.
The Apple II, an early personal computer
Valve (tube) driven computer designs were in use throughout the 1950s, but they were eventually replaced in the 1960s by transistor-based computers, which were smaller, faster, cheaper, and much more reliable, allowing them to be commercially produced. By the 1970s, the adoption of integrated circuit technology had enabled computers to be produced at a low enough cost to allow individuals to own a personal computer of the type familiar today.

How computers work: the stored program architecture
While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the stored program architecture (sometimes called the von Neumann architecture). The design made the universal computer a practical reality.
The architecture describes a computer with four main sections: the arithmetic and logic unit (ALU), the control circuitry, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by bundles of wires (called "buses" when the same bundle supports more than one data path) and are usually driven by a timer or clock (although other events could drive the control circuitry).
Conceptually, a computer's memory can be viewed as a list of cells. Each cell has a numbered "address" and can store a small, fixed amount of information. This information can either be an instruction, telling the computer what to do, or data, the information which the computer is to process using the instructions that have been placed in the memory. In principle, any cell can be used to store either instructions or data.
The ALU is in many senses the heart of the computer. It is capable of performing two classes of basic operations. The first is arithmetic operations; for instance, adding or subtracting two numbers. The set of arithmetic operations may be very limited; indeed, some designs do not directly support multiplication and division, which instead are carried out by programs that perform repeated additions, subtractions, and other digit manipulations. The second class of ALU operations involves comparison operations: given two numbers, determining whether they are equal or, if not, which is larger.
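As a minimal sketch of that point (a toy Python example, not something from the article), multiplication can be built out of nothing more than addition, subtraction, and comparison:

    # Toy example: multiply two non-negative integers using only the kinds
    # of operations even a very limited ALU provides directly.
    def multiply(a, b):
        result = 0
        while b > 0:               # comparison
            result = result + a    # addition
            b = b - 1              # subtraction
        return result

    print(multiply(6, 7))  # prints 42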
The I/O systems are the means by which the computer receives information from the outside world, and reports its results back to that world. On a typical personal computer, input devices include objects like the keyboard and mouse, and output devices include computer monitors, printers and the like, but as will be discussed later a huge variety of devices can be connected to a computer and serve as I/O devices.
The control system ties this all together. Its job is to read instructions and data from memory or the I/O devices, decode the instructions, provide the ALU with the correct inputs according to the instructions, "tell" the ALU what operation to perform on those inputs, and send the results back to the memory or to the I/O devices. One key component of the control system is a counter that keeps track of the address of the current instruction; typically, this is incremented each time an instruction is executed, unless the instruction itself indicates that the next instruction should be at some other location (allowing the computer to repeatedly execute the same instructions).
Since the 1980s the ALU and control unit (collectively called a central processing unit or CPU) have typically been located on a single integrated circuit called a microprocessor.
The functioning of such a computer is in principle quite straightforward. Typically, on each clock cycle, the computer fetches instructions and data from its memory. The instructions are executed, the results are stored, and the next instruction is fetched. This procedure repeats until a halt instruction is encountered.
The set of instructions interpreted by the control unit, and executed by the ALU, are limited in number, precisely defined, and very simple operations. Broadly, they fit into one or more of four categories: 1) moving data from one location to another (an example might be an instruction that "tells" the CPU to "copy the contents of memory cell 5 and place the copy in cell 10"); 2) executing arithmetic and logical processes on data (for instance, "add the contents of cell 7 to the contents of cell 13 and place the result in cell 20"); 3) testing the condition of data ("if the contents of cell 999 are 0, the next instruction is at cell 30"); and 4) altering the sequence of operations (the previous example alters the sequence of operations, but instructions such as "the next instruction is at cell 100" are also standard).
Instructions, like data, are represented within the computer as binary code — a base two system of counting. For example, the code for one kind of "copy" operation in the Intel x86 line of microprocessors is 10110000.[4] The particular instruction set that a specific computer supports is known as that computer's machine language. Using an already-popular machine language makes it much easier to run existing software on a new machine; consequently, in markets where commercial software availability is important, suppliers have converged on one or a very small number of distinct machine languages.
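To make the fetch-decode-execute cycle and the four instruction categories described above more concrete, here is a small, self-contained Python sketch of a hypothetical stored-program machine; the opcodes and memory layout are invented purely for illustration and do not correspond to any real machine language:

    # A toy stored-program machine: memory holds both instructions and data.
    # Invented instruction forms:
    #   ("copy", src, dst)    - move data between cells
    #   ("add",  a, b, dst)   - arithmetic on data
    #   ("jz",   cell, addr)  - jump to addr if cell holds 0 (test a condition)
    #   ("jmp",  addr)        - unconditionally alter the sequence of operations
    #   ("halt",)             - stop

    def run(memory):
        pc = 0                          # program counter (part of the control unit)
        while True:
            instr = memory[pc]          # fetch
            op = instr[0]               # decode
            if op == "halt":
                break
            elif op == "copy":
                _, src, dst = instr
                memory[dst] = memory[src]
            elif op == "add":           # ALU operation
                _, a, b, dst = instr
                memory[dst] = memory[a] + memory[b]
            elif op == "jz":            # test the condition of data
                _, cell, addr = instr
                if memory[cell] == 0:
                    pc = addr
                    continue
            elif op == "jmp":           # alter the sequence of operations
                _, addr = instr
                pc = addr
                continue
            pc += 1                     # otherwise, execute the next instruction

    # Program: add the contents of cells 7 and 13 and place the result in cell 20.
    memory = [("add", 7, 13, 20), ("halt",)] + [0] * 30
    memory[7], memory[13] = 5, 9
    run(memory)
    print(memory[20])  # prints 14

A real machine would encode each of these instructions as a binary opcode (such as the x86 10110000 mentioned above) rather than as a Python tuple.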
Larger computers, such as some minicomputers, mainframe computers, and servers, differ from the model above in one significant respect: rather than one CPU, they often have several. Supercomputers often have highly unusual architectures significantly different from the basic stored-program architecture, sometimes featuring thousands of CPUs, but such designs tend to be useful only for specialized tasks.

Digital circuits
The conceptual design above could be implemented using a variety of different technologies. As previously mentioned, a stored program computer could be built entirely of mechanical components, like Babbage's designs. However, digital circuits allow Boolean logic and arithmetic using binary numerals to be implemented using relays — essentially, electrically controlled switches. Shannon's famous thesis showed how relays could be arranged to form units called logic gates, implementing simple Boolean operations. Others soon figured out that vacuum tubes — electronic devices — could be used instead. Vacuum tubes were originally used as signal amplifiers for radio and other applications, but in digital electronics they were used as very fast switches: when electricity is applied to one of the pins, current can flow between the other two.
Through arrangements of logic gates, one can build digital circuits to do more complex tasks; for instance, an adder, which implements in electronics the same method — in computer terminology, an algorithm — that children are taught for adding two numbers: add one column at a time, and carry what's left over. Eventually, through combining circuits together, a complete ALU and control system can be built up. This does require a considerable number of components. CSIRAC, one of the earliest stored-program computers, is probably close to the smallest practically useful design. It had about 2,000 valves, some of which were "dual components",[5] so this represented somewhere between 2,000 and 4,000 logic components.
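As a rough sketch of how such gates compose into an adder (illustrative Python functions, not a description of any particular circuit):

    # Boolean "gates" as functions.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, carry_in):
        # Add three bits; return (sum_bit, carry_out).
        s = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return s, carry_out

    def ripple_add(x_bits, y_bits):
        # Add two numbers given as bit lists (least significant bit first),
        # one column at a time, carrying what's left over.
        result, carry = [], 0
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        result.append(carry)
        return result

    print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 = 8 -> [0, 0, 0, 1, 0]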
Vacuum tubes had severe limitations for the construction of large numbers of gates. They were expensive, unreliable (particularly when used in such large quantities), took up a lot of space, and used a lot of electrical power, and, while incredibly fast compared to a mechanical switch, had limits to the speed at which they could operate. Therefore, by the 1960s they were replaced by the transistor, a new device which performed the same task as the tube but was much smaller, faster, more reliable, used much less power, and was far cheaper.
Integrated circuits are the basis of modern digital computing hardware.
In the 1960s and 1970s, the transistor itself was gradually replaced by the integrated circuit, which placed multiple transistors (and other components) and the wires connecting them on a single, solid piece of silicon. By the 1970s, the entire ALU and control unit, the combination becoming known as a CPU, were being placed on a single "chip" called a microprocessor. Over the history of the integrated circuit, the number of components that can be placed on one has grown enormously. The first ICs contained a few tens of components; as of 2006, the Intel Core Duo processor contains 151 million transistors.[6]
Tubes, transistors, and transistors on integrated circuits can be used as the "storage" component of the stored-program architecture, using a circuit design known as a flip-flop, and indeed flip-flops are used for small amounts of very high-speed storage. However, few computer designs have used flip-flops for the bulk of their storage needs. Instead, the earliest computers stored data in Williams tubes — essentially, projecting dots on a TV screen and reading them back — or in mercury delay lines, where the data was stored as sound pulses traveling slowly (compared to the machine itself) along long tubes filled with mercury. These somewhat ungainly but effective methods were eventually replaced by magnetic memory devices, such as magnetic core memory, where electrical currents were used to introduce a permanent (but weak) magnetic field in some ferrous material, which could then be read to retrieve the data. Eventually, DRAM was introduced. A DRAM unit is a type of integrated circuit containing huge banks of an electronic component called a capacitor, which can store an electrical charge for a period of time. The level of charge in a capacitor can be set to store information, and then measured to read the information when required.
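As a purely conceptual sketch (Python, not a hardware description) of how a flip-flop "remembers" a bit by feeding the output of each of two gates back into the other's input:

    def NOR(a, b):
        return 0 if (a or b) else 1

    class SRLatch:
        # An SR (set/reset) latch modeled as two cross-coupled NOR gates;
        # the feedback loop is what holds the stored bit.
        def __init__(self):
            self.q, self.q_bar = 0, 1

        def update(self, s, r):
            for _ in range(4):          # iterate until the feedback settles
                new_q = NOR(r, self.q_bar)
                new_q_bar = NOR(s, new_q)
                if (new_q, new_q_bar) == (self.q, self.q_bar):
                    break
                self.q, self.q_bar = new_q, new_q_bar
            return self.q

    bit = SRLatch()
    print(bit.update(s=1, r=0))  # set   -> 1
    print(bit.update(s=0, r=0))  # hold  -> 1 (the stored value persists)
    print(bit.update(s=0, r=1))  # reset -> 0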

I/O devices
I/O (short for input/output) is a general term for devices that send computers information from the outside world and that return the results of computations. These results can either be viewed directly by a user, or they can be sent to another machine, whose control has been assigned to the computer: In a robot, for instance, the controlling computer's major output device is the robot itself.
The first generation of computers were equipped with a fairly limited range of input devices. A punch card reader, or something similar, was used to enter instructions and data into the computer's memory, and some kind of printer, usually a modified teletype, was used to record the results. Over the years, other devices have been added. For the personal computer, for instance, keyboards and mice are the primary ways people directly enter information into the computer; and monitors are the primary way in which information from the computer is presented back to the user, though printers, speakers, and headphones are common, too. There is a huge variety of other devices for obtaining other types of input. One example is the digital camera, which can be used to input visual information. There are two prominent classes of I/O devices. The first class is that of secondary storage devices, such as hard disks, CD-ROMs, key drives and the like, which represent comparatively slow, but high-capacity devices, where information can be stored for later retrieval; the second class is that of devices used to access computer networks. The ability to transfer data between computers has opened up a huge range of capabilities for the computer. The global Internet allows millions of computers to transfer information of all types between each other.
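As a minimal illustration of the first class of I/O device (secondary storage), the Python sketch below writes a value to a disk file and reads it back; the filename is invented for the example:

    # Data written to secondary storage outlives the program run,
    # unlike values held only in main memory.
    with open("results.txt", "w") as f:    # output to a storage device
        f.write("cell 20 = 14\n")

    with open("results.txt") as f:         # later retrieval (input)
        print(f.read(), end="")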

Programs
Computer programs are simply lists of instructions for the computer to execute. These can range from just a few instructions which perform a simple task, to a much more complex instruction list which may also include tables of data. Many computer programs contain millions of instructions, and many of those instructions are executed repeatedly. A typical modern PC (in the year 2005) can execute around 3 billion instructions per second. Computers do not gain their extraordinary capabilities through the ability to execute complex instructions. Rather, they execute millions of simple instructions arranged by people known as programmers.
In practice, people do not normally write the instructions for computers directly in machine language. Such programming is incredibly tedious and highly error-prone, making programmers very unproductive. Instead, programmers describe the desired actions in a "high level" programming language which is then translated into the machine language automatically by special computer programs (interpreters and compilers). Some programming languages map very closely to the machine language, such as Assembly Language (low level languages); at the other end, languages like Prolog are based on abstract principles far removed from the details of the machine's actual operation (high level languages). The language chosen for a particular task depends on the nature of the task, the skill set of the programmers, tool availability and, often, the requirements of the customers (for instance, projects for the US military were often required to be in the Ada programming language).
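As an illustration of that difference in abstraction (a hedged sketch: the low-level steps in the comments are invented pseudocode, not any real machine language), the same small calculation can be written in one high-level Python line or spelled out step by step:

    # High-level: one readable statement.
    total = sum(price * qty for price, qty in [(3, 2), (5, 4)])
    print(total)  # prints 26

    # Roughly the kind of low-level steps this hides (pseudocode):
    #   LOAD  r1, 3       ; price
    #   LOAD  r2, 2       ; quantity
    #   MUL   r3, r1, r2  ; 6
    #   LOAD  r1, 5
    #   LOAD  r2, 4
    #   MUL   r4, r1, r2  ; 20
    #   ADD   r5, r3, r4  ; 26
    #   STORE total, r5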

Computer software is an alternative term for computer programs; it is a more inclusive phrase and includes all the ancillary material accompanying the program needed to do useful tasks. For instance, a video game includes not only the program itself, but also data representing the pictures, sounds, and other material needed to create the virtual environment of the game. A computer application is a piece of computer software provided to many computer users, often in a retail environment. The stereotypical modern example of an application is perhaps the office suite, a set of interrelated programs for performing common office tasks.

Going from the extremely simple capabilities of a single machine language instruction to the myriad capabilities of application programs means that many computer programs are extremely large and complex. A typical example is Windows XP, created from roughly 40 million lines of computer code in the C++ programming language;[7] there are many projects of even bigger scope, built by large teams of programmers. The management of this enormous complexity is key to making such projects possible; programming languages, and programming practices, enable the task to be divided into smaller and smaller subtasks until they come within the capabilities of a single programmer in a reasonable period.

Nevertheless, the process of developing software remains slow, unpredictable, and error-prone; the discipline of software engineering has attempted, with some partial success, to make the process quicker and more productive and improve the quality of the end product.

Libraries and operating systems
Soon after the development of the computer, it was discovered that certain tasks were required in many different programs; an early example was computing some of the standard mathematical functions. For the purposes of efficiency, standard versions of these were collected in libraries and made available to all who required them. A particularly common task set related to handling the gritty details of "talking" to the various I/O devices, so libraries for these were quickly developed.
By the 1960s, with computers in wide industrial use for many purposes, it became common for them to be used for many different jobs within an organization. Soon, special software to automate the scheduling and execution of these many jobs became available. The combination of managing "hardware" and scheduling jobs became known as the "operating system"; the classic example of this type of early operating system was OS/360 by IBM.[8]
The next major development in operating systems was timesharing — the idea that multiple users could use the machine "simultaneously" by keeping all of their programs in memory, executing each user's program for a short time so as to provide the illusion that each user had their own computer. Such a development required the operating system to provide each user's programs with a "virtual machine" such that one user's program could not interfere with another's (by accident or design). The range of devices that operating systems had to manage also expanded; a notable one was hard disks; the idea of individual "files" and a hierarchical structure of "directories" (now often called folders) greatly simplified the use of these devices for permanent storage. Security access controls, allowing computer users access only to files, directories and programs they had permissions to use, were also common.
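As a conceptual sketch of the timesharing idea (illustrative Python only, not how any real operating system is implemented), a simple round-robin scheduler gives each user's program a short slice of time in turn:

    from collections import deque

    def run_timeshared(programs, slice_steps=1):
        # Round-robin over several "programs" (modeled as generators),
        # running each for a short time slice so every user appears
        # to have the machine to themselves.
        ready = deque(programs)
        while ready:
            prog = ready.popleft()
            try:
                for _ in range(slice_steps):
                    print(next(prog))   # run one step of this program
            except StopIteration:
                continue                # this program has finished
            ready.append(prog)          # otherwise, back of the queue

    def user_program(name, steps):
        for i in range(steps):
            yield f"{name}: step {i}"

    run_timeshared([user_program("alice", 3), user_program("bob", 2)])
    # Output interleaves alice's and bob's steps, giving each the illusion
    # of running continuously.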
Perhaps the last major addition to the operating system was a set of tools to provide programs with a standardized graphical user interface. While there are few technical reasons why a GUI has to be tied to the rest of an operating system, it allows the operating system vendor to encourage all the software for their operating system to have a similar-looking and similarly acting interface.
Outside these "core" functions, operating systems are usually shipped with an array of other tools, some of which may have little connection with these original core functions but have been found useful by enough customers for a provider to include them. For instance, Apple's Mac OS X ships with a digital video editor application.
Operating systems for smaller computers may not provide all of these functions. The operating systems for early microcomputers with limited memory and processing capability did not, and embedded computers typically have specialized operating systems or no operating system at all, with their custom application programs performing the tasks that might otherwise be delegated to an operating system.

Computer applications
Computer-controlled robots are now common in industrial manufacture.
Computer-generated imagery (CGI) is a central ingredient in motion picture visual effects. The seawater creature in The Abyss (1989) marked the acceptance of CGI in the visual effects industry.
Furby: many modern, mass-produced toys would not be possible without low-cost embedded computers.
The first digital computers, with their large size and cost, mainly performed scientific calculations, often to support military objectives. The ENIAC was originally designed to calculate ballistics-firing tables for artillery, but it was also used to calculate neutron cross-sectional densities to help in the design of the hydrogen bomb,[9][10] significantly speeding up its development. (Many of the most powerful supercomputers available today are also used for nuclear weapons simulations.) The CSIR Mk I, the first Australian stored-program computer, was used, amongst many other tasks, for the evaluation of rainfall patterns for the catchment area of the Snowy Mountains Scheme, a large hydroelectric generation project.[11] Others were used in cryptanalysis, for example the first programmable (though not general-purpose) digital electronic computer, Colossus, built in 1943 during World War II. Despite this early focus on scientific and military applications, computers were quickly put to use in other areas.
From the beginning, stored program computers were applied to business problems. The LEO, a stored-program computer built by J. Lyons and Co. in the United Kingdom, was operational and being used for inventory management and other purposes three years before IBM built their first commercial stored-program computer. Continual reductions in the cost and size of computers saw them adopted by ever-smaller organizations. Moreover, with the invention of the microprocessor in the 1970s, it became possible to produce inexpensive computers. In the 1980s, personal computers became popular for many tasks, including book-keeping, writing and printing documents, calculating forecasts and other repetitive mathematical tasks involving spreadsheets.
As computers have become less expensive, they have been used extensively in the creative arts as well. Sound, still pictures, and video are now routinely created (through synthesizers, computer graphics and computer animation), and near-universally edited by computer. They have also been used for entertainment, with the video game becoming a huge industry.
Computers have been used to control mechanical devices since they became small and cheap enough to do so; indeed, a major spur for integrated circuit technology was building a computer small enough to guide the Apollo missions,[12][13] among the first major applications for embedded computers. Today, it is harder to find a powered mechanical device that is not at least partly controlled by a computer than to find one that is. Perhaps the most famous computer-controlled mechanical devices are robots, machines with more-or-less human appearance and some subset of human capabilities. Industrial robots have become commonplace in mass production, but general-purpose human-like robots have not lived up to the promise of their fictional counterparts and remain either toys or research projects.
Robotics, indeed, is the physical expression of the field of artificial intelligence, a discipline whose exact boundaries are fuzzy but to some degree involves attempting to give computers capabilities that they do not currently possess but humans do. Over the years, methods have been developed to allow computers to do things previously regarded as the exclusive domain of humans — for instance, "read" handwriting, play chess, or perform symbolic integration. However, progress on creating a computer that exhibits "general" intelligence comparable to a human has been extremely slow.

Networking and the Internet
Computers have been used to coordinate information in multiple locations since the 1950s, with the US military's SAGE system being the first large-scale example of such a system; it led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the US began to link their computers together using telecommunications technology. This effort was funded by ARPA, and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. In the phrase of John Gage and Bill Joy (of Sun Microsystems), "the network is the computer". Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information.[14] "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly common even in mobile computing environments.

Alternative computing models
Despite the massive gains in speed and capacity over the history of the digital computer, there are many tasks for which current computers are inadequate. For some of these tasks, conventional computers are fundamentally inadequate, because the time taken to find a solution grows very quickly as the size of the problem to be solved expands. Therefore, there has been research interest in some computer models that use biological processes, or the oddities of quantum physics, to tackle these types of problems. For instance, DNA computing is proposed to use biological processes to solve certain problems. Because of the exponential division of cells, a DNA computing system could potentially tackle a problem in a massively parallel fashion. However, such a system is limited by the maximum practical mass of DNA that can be handled.
Quantum computers, as the name implies, take advantage of the unusual world of quantum physics. If a practical quantum computer is ever constructed, there are a limited number of problems for which the quantum computer is fundamentally faster than a standard computer. However, these problems, relating to cryptography and, unsurprisingly, quantum physics simulations, are of considerable practical interest.
These alternative models for computation remain research projects at the present time, and will likely find application only for those problems where conventional computers are inadequate.

Computing professions and disciplines
In the developed world, virtually every profession makes use of computers. However, certain professional and academic disciplines have evolved that specialize in techniques to construct, program, and use computers. Terminology for different professional disciplines is still somewhat fluid, and new fields emerge from time to time; however, some of the major groupings are as follows:
Computer engineering is the branch of electrical engineering that focuses both on hardware and software design, and the interaction between the two.
Computer science is an academic study of the processes related to computation, such as developing efficient algorithms to perform specific tasks. It tackles questions as to whether problems can be solved at all using a computer, how efficiently they can be solved, and how to construct efficient programs to compute solutions. A huge array of specialties has developed within computer science to investigate different classes of problems.
Software engineering concentrates on methodologies and practices to allow the development of reliable software systems while minimizing, and reliably estimating, costs and timelines.
Information systems concentrates on the use and deployment of computer systems in a wider organizational (usually business) context.
Many disciplines have developed at the intersection of computers with other professions; one of many examples is experts in geographical information systems who apply computer technology to problems of managing geographical information.
There are two major professional societies dedicated to computers, the Association for Computing Machinery and IEEE Computer Society.

For more information on Computer, please visit
Wikipedia.
Technology
From Wikipedia, the free encyclopedia

By the mid 20th century humans had achieved a level of technological mastery sufficient to leave the surface of the planet for the first time and explore space.
Technology is a word with origins in the Greek word technologia (τεχνολογία), techne (τέχνη) "craft" and logia (λογία) "saying." It is a broad term dealing with the use and knowledge of humanity's tools and crafts.

Contents
1 Different meanings of technology
1.1 Technology as tool
1.2 Technology as technique
1.3 Technology as a cultural force
2 Science, engineering and technology
3 History of technology
3.1 Electronics
4 Economics and technological development
4.1 Funding
4.1.1 Government funding for new technology
4.1.2 Private funding
4.2 Other economic considerations
5 Sociological factors and effects
5.1 Values
5.2 Ethics
5.3 Lifestyle
5.4 Institutions and groups
5.5 International
6 Environment
7 Control
7.1 Autonomous technology
7.2 Government
7.3 Choice
8 Technology and philosophy
8.1 Technicism
8.2 Optimism, pessimism and appropriate technology
8.2.1 Pessimism
8.2.2 Optimism
8.2.3 Attitude towards technology
8.2.4 Appropriate technology
8.3 Theories and concepts in technology
9 See also
10 References
11 Bibliography

Different meanings of technology
Depending on context, the word technology has the following definitions and uses:

Technology as tool

Robert Hooke's microscope (1655)
Technology can be most broadly defined as the material entities created by the application of mental and physical effort to nature in order to achieve some value. In its most common use, technology refers to tools and machines that may be used to help solve problems. In this use, technology is a far-reaching term that may include both simple tools, such as a wooden spoon, and complex tools, such as a space station or the written sets of procedures and maintenance manuals for it.

Technology as technique
In this use, technology is the current state of our knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs, or satisfy wants. Technology in this sense includes technical methods, skills, processes, techniques, tools and raw materials (for example, in such uses as computer technology, construction technology, or medical technology).

Technology as a cultural force
Technology can also be viewed as an activity that forms or changes culture (such as in manufacturing technology, infrastructure technology, or space-travel technology). (McGinn)
As a cultural activity, technology predates both science and engineering, each of which formalizes some aspects of technological endeavor. This is not to imply that technology is the only, or the dominant, culture-forming activity. Culture itself acts strongly upon, and shapes, the form and nature of technology. However, due to the increasingly widespread use of ever more complex technologies and their frequently unintended consequences, problems may arise in their use, many of which have been separately studied. Such topics include technological ethics, environmental effects, technological byproducts, and technological risk. The cultural force of technology (e.g., as seen in the invention of writing) may be said to be the driving force that sets us apart from other species.

Science, engineering and technology
The distinctions between science, engineering and technology are not always clear. Generally, science is the reasoned investigation or study of nature, aimed at discovering enduring relationships (principles) among elements of the (phenomenal) world. It generally employs formal techniques, i.e., some set of established rules of procedure, such as the scientific method. Engineering is the use of scientific principles to achieve a planned result. However, technology broadly involves the use and application of knowledge (e.g., scientific, engineering, mathematical, language, and historical), both formally and informally, to achieve some "practical" result (Roussel, et al.).

For example, science might study the flow of electrons in electrical conductors. This knowledge may then be used by engineers to create artifacts, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists, but scientists generally less so.

History of technology
Main article: History of technology
See also: Timeline of invention
See also: History of science and technology
Flint spear, circa 100,000 BC
The history of technology is at least as old as humanity. Some primitive forms of tools have been discovered with almost every find of ancient human remains dating from the time of Homo habilis. Nevertheless, other animals have been found to use tools—and to learn to use and refine tools—so it is incorrect to distinguish humans as the only tool-using or tool-making animal. The history of technology follows a progression from simple tools and simple (mostly human) energy sources to complex high-technology tools and energy sources.
The earliest technologies converted readily occurring natural resources (such as rock, wood and other vegetation, bone and other animal byproducts) into simple tools. Processes such as carving, chipping, scraping, rolling (the wheel), and sun-baking are simple means for the conversion of raw materials into usable products. Anthropologists have uncovered many early human habitations and tools made from natural resources. Birds and other animals often build elaborate nests and some simple tools out of various materials. We normally don't consider them to be performing a technological feat, primarily because such behavior is largely instinctive. There is some evidence of occasional cultural transference, especially among the other, nonhuman primates. Nevertheless, there is now considerable evidence of such simple technology among animals other than humans.
The use, and then mastery, of fire (circa 1,000,000 - 500,000 BC [1]) was a turning point in the technological evolution of humankind, affording a simple energy source with many profound uses. Perhaps the first use of fire beyond providing heat was the preparation of food. This enabled a significant increase in the vegetable and animal sources of food, while greatly reducing perishability.
The use of fire extended the capability for the treatment of natural resources and allowed the use of natural resources that require heat to be useful. (The oldest projectile found is a wooden spear with fire hardened point, circa 250,000 BC.) Wood and charcoal were among the first materials used as a fuel. Wood, clay, and rock (such as limestone), were among the earliest materials shaped or treated by fire, for making artifacts such as weapons, pottery, bricks, and cement. Continuing improvements led to the furnace and bellows and provided the ability to smelt and forge native metals (naturally occurring in relatively pure form). Gold, copper, silver, and lead, were such early metals. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 8000 BCE). Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires.
The wheel was invented circa 4000 BCE.
Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4000 BCE). The first uses of iron alloys such as steel date to around 1400 BCE.
Meanwhile, humans were learning to harness other forms of energy. The earliest known use of wind power is the sailboat. The earliest record of a ship under sail is shown on an Egyptian pot dating back to 3200 BCE. From prehistoric times, Egyptians probably used the power of the Nile's annual floods to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and "catch" basins. Similarly, the early peoples of Mesopotamia, the Sumerians, learned to use the Tigris and Euphrates rivers for much the same purposes. But more extensive use of wind and water (and even human) power required another invention.
It is still a mystery as to who invented the wheel, and when and why it was invented. According to some archaeologists, it was probably first invented about 8000 B.C. The wheel was almost certainly independently invented in Mesopotamia (present-day Iraq). Estimates on when this may have occurred range from 5500 to 3000 B.C., with most guesses closer to 4000 B.C. The oldest artifacts with drawings that depict wheeled carts date from about 3000 B.C., though for all anyone knows, the wheel may have been in use for millennia before these drawings were made. There is also evidence from the same period that wheels were used for the production of pottery. (Note that the original potter's wheel was probably not a wheel at all, but rather an irregularly shaped slab of flat wood with a small hollowed or pierced area near the center, mounted on a peg driven into the earth. It would have been rotated by repeated tugs by the potter or his assistant.) More recently, the oldest known wooden wheel in the world was found in the Ljubljana marshes of Slovenia.
The invention of the wheel revolutionized activities as disparate as transportation, war, and the production of pottery (for which it may have been first used). It didn't take long to discover that wheeled wagons could be used to carry heavy loads and fast (rotary) potters' wheels enabled early mass production of pottery. But it was the use of the wheel as a transformer of energy (through water wheels and windmills and even treadmills) that revolutionized the application of nonhuman power sources.
Tools include both simple machines (such as the lever, the screw, and the pulley), and more complex machines (such as the clock, the engine, the electric generator and the electric motor, the computer, radio, and the Space Station, among many others).
Integrated circuit
As tools increase in complexity, so does the type of knowledge needed to support them. Complex modern machines require libraries of written technical manuals of collected information that has continually increased and improved; their designers, builders, maintainers, and users often require the mastery of decades of sophisticated general and specific training. Moreover, these tools have become so complex that a comprehensive infrastructure of technical knowledge-based lesser tools, processes and practices (complex tools in themselves) exists to support them, including engineering, medicine, and computer science. Complex manufacturing and construction techniques and organizations are needed to construct and maintain them. Entire industries have arisen to support and develop succeeding generations of increasingly more complex tools.

Electronics
Growth of transistor counts for Intel processors (dots) and Moore's Law (upper line = 18 months; lower line = 24 months)
Electronic devices have grown ever smaller and more sophisticated. Components have progressed from vacuum tubes, through transistors, to integrated circuits.
Moore's law is the empirical observation that, at our current rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost, doubles approximately every 18 months.
It is attributed to Gordon E. Moore,[1] a co-founder of Intel. However, Moore had heard Douglas Engelbart's similar observation possibly in 1965. Engelbart, a co-inventor of today's mechanical computer mouse, believed that the ongoing improvement of integrated circuits would eventually make interactive computing feasible.
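As a quick worked illustration of that doubling rule (a back-of-the-envelope Python sketch; the starting figure is an assumption, roughly an early-1970s chip, not data from this article):

    def projected_count(start_count, years, doubling_months=18):
        # Project component count under a "double every N months" rule.
        doublings = (years * 12) / doubling_months
        return start_count * 2 ** doublings

    for years in (3, 6, 9):
        print(years, "years:", round(projected_count(2300, years)))
    # Every additional 3 years multiplies the count by 2**(36/18) = 4:
    # 9200, 36800, 147200.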
The influence of computers was pervasive in the workplace, radically changing the way people bought goods and services and kept accounts. The personal computer was created and became a common fixture in the home. As computers became smaller, it became possible to have handheld versions of the telephone and other devices.
Electronics technology allows people to receive news and entertainment from all over the world. It also allows oppressive governments to spread propaganda and learn about the private lives of their citizens.

Economics and technological development
Nuclear reactor, Doel, Belgium
Economics can be said to have arrived on the scene when the occasional, spontaneous exchange of goods and services began to occur on a less occasional, less spontaneous basis. It probably didn't take long for the maker of arrowheads to realize that he could do a lot better by concentrating on the making of arrowheads and bartering for his other needs. Clearly, regardless of the goods and services bartered, some amount of technology was involved—if no more than in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginnings, technology can be said to have spurred the development of more elaborate economies.

In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually illimitable. However, while in the beginning, technological investment involved little more than the time, efforts, and skills of one or a few men, today, such investment may involve the collective labor and skills of many millions.

Funding
Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part. It is conventional to divide up funding sources into governmental (involving whole, or nearly whole, social enterprises) and private (involving more limited, but generally more sharply focused) business or individual enterprises.

Government funding for new technology
The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology.
In 1980, the UK government invested just over 6 million pounds in a four-year programme, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, microprocessor training materials and software, and extensive teacher training. Similar programmes have been instituted by governments around the world.
Technology has frequently been driven by the military, with many modern applications being developed for the military before being adapted for civilian use. However, this has always been a two-way flow, with industry often taking the lead in developing and adopting a technology which is only later adopted by the military.
Entire government agencies are specifically dedicated to research, such as America's National Science Foundation, the United Kingdom's scientific research institutes, and America's Small Business Innovative Research effort. Many other government agencies dedicate a major portion of their budget to research and development.

Private funding
Research and development is one of the biggest areas of investments made by corporations toward new and innovative technology.
Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD, about two-thirds of research and development in scientific and technical fields is carried out by industry, and 20 percent and 10 percent respectively by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 20 percent. [2]

Other economic considerations
Intermediate technology, more of an economics concern, refers to compromises between central and expensive technologies of developed nations and those which developing nations find most effective to deploy given an excess of labour and scarcity of cash. In general, a so-called "appropriate" technology will also be "intermediate".
Persuasion technology: In economics, definitions or assumptions of progress or growth are often related to one or more assumptions about technology's economic influence. Challenging prevailing assumptions about technology and its usefulness has led to alternative ideas like uneconomic growth or measuring well-being. These, and economics itself, can often be described as technologies, specifically, as persuasion technology.
Technocapitalism
Technological diffusion
Technology acceptance model
Technology lifecycle
Technology transfer
Sociological factors and effects

Downtown Tokyo (2005)
The use of technology has a great many effects; these may be separated into intended effects and unintended effects. Unintended effects are usually also unanticipated, and often unknown before the arrival of a new technology. Nevertheless, they are often as important as the intended effect.
The most subtle side effects of technology are often sociological. They are subtle because the side effects may go unnoticed unless carefully observed and studied. These may involve gradually occurring changes in the behavior of individuals, groups, institutions, and even entire societies.

Values
The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations:
Mechanistic world view: Viewing the universe as a collection of parts (like a machine) that can be individually analyzed and understood (McGinn). This is a form of reductionism that is rare nowadays. However, the "neo-mechanistic world view" holds that nothing in the universe is beyond the understanding of the human intellect. Also, while all things are greater than the sum of their parts (e.g., even if we consider nothing more than the information involved in their combination), in principle, even this excess must eventually be understood by human intelligence. That is, no divine or vital principle or essence is involved.
Efficiency: A value, originally applied only to machines, but now applied to all aspects of society, so that each element is expected to attain a higher and higher percentage of its maximal possible performance, output, or ability. (McGinn)
Social progress: The belief that there is such a thing as social progress, and that, in the main, it is beneficent. Before the Industrial Revolution, and the subsequent explosion of technology, almost all societies believed in a cyclical theory of social movement and, indeed, of all history and the universe. This was, obviously, based on the cyclicity of the seasons, and an agricultural economy's and society's strong ties to that cyclicity. Since much of the world (i.e., everyone but the hyperindustrialized West) is closer to its agricultural roots, it is still much more amenable to cyclicity than to progress in history. This may be seen, for example, in Prabhat Ranjan Sarkar's modern social cycles theory. For a more Westernized version of social cyclicity, see Generations: The History of America's Future, 1584 to 2069 by Neil Howe and William Strauss (Harper Perennial, reprint edition, 1992; ISBN 0688119123), and subsequent books by these authors.
Ethics
Winston provides an excellent summary of the ethical implications of technological development and deployment. He states there are four major ethical implications:
Challenges traditional ethical norms.
Creates an aggregation of effects.
Changes the distribution of justice.
Provides great power.
But the most important contribution of technology is making the lives of ordinary people much easier and helping them achieve what was previously not possible.

Lifestyle
Technology, throughout history, has allowed people to complete more tasks in less time and with less energy. Many herald this as a way of making life easier. However, work has continued to be proportional to the amount of energy expended, rather than to the quantitative amount of information or material processed. Technology has had profound effects on lifestyle throughout human history, and as the rate of progress increases, society must deal with both the good and bad implications.
In many ways, technology simplifies life.
The rise of a leisure class
A more informed society can make quicker responses to events and trends
Sets the stage for more complex learning tasks
Increases multi-tasking
Global Networking
Creates denser social circles
Cheap price
In other ways, technology complicates life.
Sweatshops and harsher forms of slavery are more likely to be found in technologically advanced societies, relative to primitive societies. However, the replacement of workers with machines and social progress such as emancipation transcends this in many post-industrial societies.
The increasing disparity between technologically advanced societies and those who are not.
More people are starving now than at any point in history or pre-history; however, the majority of these people live in subsistence-based and therefore less technological societies. This is also relative to the world's population explosion. Technologies such as genetics may help to alleviate the stress put on resources.
Working to drive and driving to work, and consequently dealing with the traffic jams: the increase in transportation technology has brought congestion in some areas.
Technicism
New forms of danger existing as a consequence of new forms of technology, such as the first generation of nuclear reactors.
New forms of entertainment, such as video games and internet access, which may have social effects on areas such as academic performance.
New diseases and disorders, such as obesity, as well as laziness and a loss of personality.

Institutions and groups
Technology often enables organizational and bureaucratic group structures that otherwise, and heretofore, were simply not possible. Examples of this might include:
The rise of very large organizations: e.g., governments, the military, health and social welfare institutions, supranational corporations.
The commercialization of leisure: sports events, products, etc. (McGinn)
The almost instantaneous dispersal of information (especially news) and entertainment around the world.

International
Technology enables greater knowledge of international issues, values, and cultures. Owing mostly to mass transportation and mass media, the world seems a much smaller place, because of the following, among other factors:
Globalization of ideas
Embeddedness of values
Population growth and control
Others

Environment
Some technologies have negative environmental side effects, such as pollution and lack of sustainability. Some technologies are designed specifically with the environment in mind, but most are designed first for economic or ergonomic effects.
The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, and ores) and the added pollution of air, water, and land. The more subtle effects include debates over long-term consequences (e.g., global warming, deforestation, natural habitat destruction, coastal wetland loss).
Each wave of technology creates forms of waste previously unknown to humans: toxic waste, radioactive waste, electronic waste.

Control

Autonomous technology
In one line of thought, technology develops autonomously; in other words, technology seems to feed on itself, moving forward with a force that humans cannot resist. To these thinkers, technology is "inherently dynamic and self-augmenting." (McGinn, p. 73)
Jacques Ellul is one proponent of the view that technology is irresistible to humans. He espouses the idea that humanity cannot resist the temptation of expanding its knowledge and its technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the perceived autonomy arises because humans do not adequately consider the responsibility that is inherent in technological processes.
Another proponent of these ideas is Langdon Winner, who believes that technological evolution is essentially beyond the control of individuals or society.

Government
Individuals rely on governmental assistance to control the side effects and negative consequences of technology.
Supposed independence of government. An assumption commonly made about government is that its governance role is neutral or independent. Often, if not usually, that assumption is misplaced: governing is a political process, more so in some countries than in others, so government will be swayed by political winds of influence. In addition, because government provides much of the funding for technological research and development, even government has a vested interest in certain outcomes.
Liability. One means of controlling technology is to place responsibility for harm with the agent causing the harm. Government can allow more or less legal liability to fall on the organizations or individuals responsible for damages.
Legislation.

Choice
Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include:
the channels of distribution: how products go from raw materials to consumption to disposal;
the cultural beliefs regarding style, freedom of choice, consumerism, materialism, etc.;
the economic values we place on the environment, individual wealth, government control, capitalism, etc.

Technology and philosophy

Technicism
Generally, technicism is an over-reliance on, or overconfidence in, technology as a benefactor of society.
Taken to an extreme, some argue that technicism is the belief that humanity will ultimately be able to control the entirety of existence using technology. In other words, human beings will eventually be able to master all problems, supply all wants and needs, and possibly even control the future. (For a more complete treatment of the topic, see the work of Egbert Schuurman, for example at [3].) Some, such as Monsma, et al., connect these ideas to the abdication of God as a higher moral authority.
More commonly, technicism refers critically to the widely held belief that newer, more recently developed technology is "better." For example, more recently developed computers are faster than older computers, and more recently developed cars have greater fuel efficiency and more features than older cars. Because current technologies are generally accepted as good, future technological developments are not examined circumspectly, resulting in what seems to be a blind acceptance of technological development.

Optimism, pessimism and appropriate technology

Pessimism
On the somewhat pessimistic side are philosophers such as Herbert Marcuse, Jacques Ellul, and John Zerzan, who believe that technological societies are inherently flawed. They suggest that the result of such a society is to become ever more technological at the cost of freedom and psychological health (and probably physical health in general, as pollution from technological products is dispersed).
Perhaps the most poignant criticisms of technology are found in what are now considered to be dystopian literary classics, for example Aldous Huxley's Brave New World and other writings, Anthony Burgess's A Clockwork Orange, and George Orwell's Nineteen Eighty-Four.

Optimism
On the other hand, optimistic assumptions are made by proponents of ideologies such as transhumanism and singularitarianism, which view technological development as generally having beneficial effects for society and the human condition. In these ideologies, technological development is morally good. Some critics see these ideologies as examples of scientism or techno-utopianism and fear the idea of a technological singularity that their proponents support. Notable technological optimists include Karl Marx and James Hughes.

Attitude towards technology
A person's attitude towards technology tends to depend on that person's exposure to it, rather than on the notion that age determines whether the attitude is positive or negative.

Appropriate technology
The notion of appropriate technology, however, was developed in the 20th century to describe situations where it was not desirable to use very new technologies or those that required access to some centralized infrastructure or parts or skills imported from elsewhere. The eco-village movement emerged in part due to this concern.
Campus Center for Appropriate Technology, Humboldt State University
Appropriate Technology Projects Wiki, Humboldt State University
National Center for Appropriate Technology
Student Projects in Appropriate Technology, Humboldt State University

Theories and concepts in technology
There are many theories and concepts that seek to explain the relationship between technology and society:
Appropriate technology
Diffusion of innovations
Jacques Ellul (technological society)
Posthumanism
Precautionary principle
Strategy of technology
Techno-progressivism
Technocriticism
Technological evolution (Radovan Richta)
Technological determinism
Technological singularity
Technorealism
Transhumanism

For more information on Technology, please visit
Wikipedia.
Computer networking
From Wikipedia, the free encyclopedia

Network cards such as this one can transmit data at high rates over Ethernet cables.
Computer networking is the scientific and engineering discipline concerned with communication between computer systems. Such a network involves at least two computers, which can be separated by a few inches (e.g. via Bluetooth) or by thousands of miles (e.g. via the Internet). Computer networking is sometimes considered a sub-discipline of telecommunications.
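Since the article defines a network as at least two communicating computers, a minimal sketch may help make this concrete. The example below is illustrative only and is not part of the encyclopedia text: it uses Python's standard socket module, and the host address and port number are arbitrary assumptions.

```python
# Minimal sketch of two networked endpoints using Python's standard
# "socket" module. The address and port below are illustrative only.
import socket

HOST, PORT = "127.0.0.1", 50007   # loopback address, arbitrary port

def run_server():
    """Wait for one connection and echo back whatever is received."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)           # receive up to 1 KiB
            conn.sendall(b"echo: " + data)   # send it back unchanged

def run_client(message: bytes) -> bytes:
    """Connect to the server, send a message, and return the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message)
        return cli.recv(1024)
```

Running run_server() in one process and run_client(b"hello") in another exchanges data over TCP; the same code works whether the two endpoints are a few inches or thousands of miles apart, which is the point made above.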

Contents
1 History
2 Categorizing
2.1 By scale
2.2 By functional relationship
2.3 By network topology
2.4 By specialized function
3 Protocol stacks
4 Suggested topics
4.1 Layers
4.2 Data transmission
4.2.1 Wired transmission
4.2.2 Wireless transmission
4.3 Other

History
Early on, instructions were carried between calculating machines and computers by human users. In September 1940, George Stibitz used a teletype machine to send instructions for a problem set from Dartmouth College in New Hampshire to his Complex Number Calculator in New York and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANet. In 1964, researchers at Dartmouth developed a time-sharing system for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a computer (DEC's PDP-8) to route and manage telephone connections. In the early 1960s, Paul Baran proposed a network system consisting of datagrams, or packets, that could be used in a packet-switching network between computer systems. In 1969, the University of California at Los Angeles, the Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANet, using 50 kbit/s circuits.
Networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from researchers and businesses to families and individuals in everyday use.

Categorizing

By scale
Local area network (LAN)
HomePNA
Power line communication (HomePlug)
Metropolitan area network (MAN)
Wide area network (WAN)
Personal area network (PAN)

By functional relationship
Client-server
Peer-to-peer (Workgroup)

By network topology
Bus network
Star network
Ring network
Mesh network
Star-bus network

By specialized function
Storage area networks
Server farms
Process control networks
Value added network
SOHO network
Wireless community network
XML appliance

Protocol stacks
Computer networks may be implemented using a variety of protocol stack architectures, computer buses, or combinations of media and protocol layers, incorporating one or more of the following (a brief sketch of how such layers nest appears after this list):
ARCNET
AppleTalk
ATM
Bluetooth
DECnet
Ethernet
FDDI
Frame relay
HIPPI
IEEE 1394 (also known as FireWire or i.Link)
IEEE 802.11
IEEE-488
IP
IPX
Myrinet
QsNet
RS-232
SPX
Systems Network Architecture (SNA)
Token Ring
TCP (see TCP tuning for a discussion of improving its performance)
USB
UDP
X.25
For a list of more see Network protocols.
For standards see IEEE 802.
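As a rough illustration of how such protocol layers combine (this sketch is not part of the encyclopedia text), the Python example below sends a single UDP datagram: the application data rides inside UDP, which rides inside IP, which in turn is carried by whatever link-layer technology (Ethernet, IEEE 802.11, and so on) connects the two hosts. The destination address and port are illustrative assumptions.

```python
# Illustrative only: sending application data over the UDP/IP stack with
# Python's standard socket module. The operating system adds the IP and
# link-layer (e.g. Ethernet) framing beneath the datagram we hand it.
import socket

payload = b"temperature=21.5"        # application-layer data
dest = ("192.0.2.10", 9999)          # documentation-range address, arbitrary port

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(payload, dest)       # UDP datagram -> IP packet -> link-layer frame
```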

Suggested topics
Topics for further reading to acquire an in-depth understanding of computer networks include:
Communication theory

Layers
OSI model            TCP/IP model
Application layer    Application layer
Presentation layer
Session layer
Transport layer      Transport layer
Network layer        Network layer
Data-link layer      Network Access Layer
Physical layer

Data transmission

Wired transmission
Public switched telephone network
Modems and dialup
Dedicated lines – leased lines
Time-division multiplexing
Packet switching
Frame relay
PDH
Ethernet
RS-232
Optical fiber transmission
Synchronous optical networking
Fiber distributed data interface

Wireless transmission
Short range
Bluetooth
Medium range
IEEE 802.11
Long range
Satellite
MMDS
SMDS
Mobile phone data transmission (channel access methods)
CDMA
CDPD
GSM
TDMA
Paging networks
DataTAC
Mobitex
Motient

Other
Computer networking device
Network card
Naming schemes
Network monitoring

For more information on Computer Networking, please visit
Wikipedia.
Communication
From Wikipedia, the free encyclopedia

Communication is the process of exchanging information, usually via a common protocol. "Communication studies" is the academic discipline focused on communication forms, processes and meanings, including speech, interpersonal and organizational communication. "Mass communication" is a more specialized academic discipline focused on the institutions, practice and effects of journalism, broadcasting, advertising, public relations and related mediated communication directed at a large, undifferentiated or segmented audience.

Contents
1 Forms and components of human communication
2 Communication technology
3 Communication barriers
4 Other examples of communication
4.1 Artificial
4.2 Biological
5 References
6 See also
7 External resources
7.1 Books
7.2 Websites

Forms and components of human communication
Examples of human communication are the sharing of knowledge and experiences, the giving or receiving of orders, and cooperation. Common forms of human communication include body language, sign language, speaking, writing, gestures, and broadcasting. Communication can be interactive, transactive, intentional, or unintentional; it can also be verbal or nonverbal. It is often claimed that only 7 to 11% of all communication is verbal, the rest being non-verbal in its various aspects. Communication varies considerably in form and style when considering scale. Internal communication, within oneself, is intrapersonal, while communication between two individuals is interpersonal. Interpersonal communication in the form of conversation plays an important role in learning. At larger scales of communication, both the system of communication and the media of communication change. Small-group communication takes place in settings of between three and twelve individuals, creating a different set of interactions from large groups such as organizational communication in settings like companies or communities. At the largest scales, mass communication describes communication to huge numbers of individuals through mass media. Communication also has a time component, being either synchronous or asynchronous.
There are a number of theories of communication that attempt to explain human communication, and various theories relating to human communication draw upon different core philosophies.
For instance, some theories treat communication as a five-step process: a sender creates (or encodes) a message; the message is transmitted through a channel to another individual, organization, or group of people; the message is received; it is then interpreted; and finally it is responded to, which completes the process of communication. This model of the communication process is based on a model of signal transmission known as the Shannon-Weaver model.
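As a toy illustration of that five-step view (a sketch added here, not part of the encyclopedia text; all function names are invented), the steps can be written out as plain Python functions:

```python
# Toy model of the five-step process described above:
# encode -> transmit through a channel -> receive -> interpret -> respond.
# Names and structure are illustrative, not a standard API.

def encode(thought: str) -> bytes:
    return thought.encode("utf-8")        # the sender turns an idea into a signal

def channel(signal: bytes) -> bytes:
    return signal                         # an ideal, noise-free channel

def interpret(signal: bytes) -> str:
    return signal.decode("utf-8")         # the receiver decodes the signal

def respond(message: str) -> str:
    return f"Received: {message}"         # the response completes the loop

reply = respond(interpret(channel(encode("Hello, world"))))
print(reply)  # -> Received: Hello, world
```

The Shannon-Weaver model also accounts for noise in the channel; the channel() function above is deliberately ideal to keep the sketch short.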
Yet another communication model can be seen in the work of Roman Jakobson. Six elements and their correlative functions comprise this particular model.

Communication technology
The root of communication by artificial means, i.e. not using biologically immediate means such as vocalization (or speech, when occurring between humans), is generally believed to be the art of writing, which most probably goes back to the more ancient arts of drawing and painting.

Nowadays, the use of technology to aid and enhance distance communication (telecommunications for short) is usually taken to represent communication technology in general. Examples include:
Digital telecommunications using digital transmission media
Encoding and decoding, such as compression and encryption (as they relate to enhancing or restricting communication); for example, encryption can turn a one-to-many channel into a one-to-one communication (see the sketch after this list)
Telegraphy
Computer networks
Analog telecommunications
Telephony
Radio
TV
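The encryption point in the list above can be made concrete with a small sketch. This example is illustrative only and assumes the third-party Python cryptography package; the message and scenario are invented. A message broadcast to many recipients is readable only by the one who holds the key, effectively turning a one-to-many channel into a one-to-one communication.

```python
# Illustrative sketch, assuming the third-party "cryptography" package
# (pip install cryptography). A message broadcast to everyone can be
# read only by the intended recipient who holds the symmetric key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # shared secretly with ONE recipient
cipher = Fernet(key)

broadcast = cipher.encrypt(b"meet at noon")  # anyone can receive this ciphertext...
plaintext = cipher.decrypt(broadcast)        # ...but only the key holder can read it
print(plaintext)                             # b'meet at noon'
```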
In telecommunications, the first transatlantic two-way radio broadcast occurred on July 25, 1920.
As the technology evolved, communication protocols also had to evolve; for example, Thomas Edison had to discover that "hello" was the least ambiguous greeting by voice over a distance, since previous greetings such as "hail" tended to be lost or garbled in transmission.
As regards human communication, these diverse fields can be divided into those which cultivate a thoughtful exchange between a small number of people (debate, talk radio, e-mail, personal letters) on the one hand, and those which broadly disseminate a simple message (public relations, television, cinema) on the other.
Our indebtedness to the Ancient Romans in the field of communication does not end with the Latin root "communicare". They devised what might be described as the first real mail or postal system in order to centralize control of the empire from Rome. This allowed Rome to gather knowledge about events in its many widespread provinces.
As the Romans well knew, communication is as much about taking in towards the centre as it is about putting out towards the extremes.
In virtual management an important issue is computer-mediated communication.
The view people take toward communication is changing, as new technologies change the way they communicate and organize. In fact, it is the changing technology of communication that tends to make the most frequent and widespread changes in a society; take, for example, the rise of webcam chat and other network-based visual communication between distant parties. The latest trend in communication, decentralized personal networking, is termed smartmobbing.
The introduction of an important new communication technology creates a new civilization, according to a book titled "Five Epochs of Civilization" by William McGaughey (Thistlerose, 2000). Ideographic writing produced the first civilization; alphabetic writing, the second; printing, the third; electronic recording and broadcasting, the fourth; and computer communication, the fifth. These successive world civilizations are also associated with the institutional mix of society. World history is told accordingly.

Communication barriers
The following factors may be barriers in human communication:

Language itself
Time lag
Politics
Physics (such as background noise)
Emotions
Anxiety associated with communication is known as communication apprehension. Such anxiety tends to be influenced by one's self-concept. Besides apprehension, communication can be impaired via bypassing, indiscrimination, and polarization. Failing to share a common language is also a significant barrier in many parts of the world.

Other examples of communication

Artificial
jungle drums
smoke signals
semaphores (devices that increase the distance from which "hand" signals can be seen, by increasing the size of the moving object)
Gold-plated record (sent into interstellar space aboard Voyager 1)
Photography
Art (including Theatre Arts)

Biological
Written and spoken language
Hand signals and body language
Territorial marking (used by animals such as dogs to signal "stay away from my territory")
Pheromones, which communicate (amongst other things) messages such as "I'm ready to mate"; a well-known example is moth traps, which contain pheromones to attract moths.

For more information on Communication, please visit
Wikipedia.