By Robert Swider
By Dennis Kambury
“Pluralitas non est ponenda sine necessitate.” – Occam’s Razor
Humanity and technology have enjoyed a symbiotic relationship for hundreds of thousands of years, dating back to Homo erectus, which many anthropologists believe was the first hominid to control fire. The control of fire and the use of stone tools were turning points in our cultural evolution, though controlling fire was arguably the more significant leap forward. We now control technologies far more complex than fire and stone tools, and the future will bring technologies that make current ones pale in comparison.
Human evolution is thought to have been directly tied to the use of these early technologies. Stone tools gave us weapons and, in conjunction with fire, allowed early man to significantly increase the amount of meat in his diet, increasing his energy intake, which in turn fueled evolutionary changes including larger brains, which begat even more sophisticated tools. Richard Wrangham of Harvard University argues that the cooking of plant foods may have triggered brain expansion by making the complex carbohydrates in starchy foods more digestible, in effect allowing humans to absorb more calories – the modern brain requires roughly 20 percent of the body’s total energy consumption. Further, the cooking of meat reduces the energy required to digest it and opens up tightly woven carbohydrate molecules, permitting easier absorption.
Since the latter part of the 20th century, rapid technological changes have transformed the human condition and society. Many of these changes result from advancements in computers and communications, their availability and widespread utilization, including the advent and rapid expansion of the Internet. After the invention of the transistor at Bell Labs in 1947 and of the integrated circuit in 1959, silicon and fiber optics became prime drivers for many technological changes; the future will be driven by robotics, genetics and nanotechnology.
The history of computer technology has involved a sequence of changes from one type of physical realization to another – from gears to relays to valves to transistors to integrated circuits and so on. Today’s advanced lithographic techniques can create chips with features only a fraction of a micron wide. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms.
The rapid expansion of computer technology during the second half of the twentieth century and the early years of the twenty-first has been driven in large part by the creation and evolution of integrated circuits, bandwidth and networking. Three laws are generally accepted as currently governing the spread of computer-related technology: Moore’s Law, Gilder’s Law and Metcalfe’s Law.
Moore’s Law: The “law” was adopted after Intel co-founder Gordon Moore observed in a 1965 article, “Cramming More Components onto Integrated Circuits,” that the number of transistors on a microchip would double roughly every year; he later revised the forecast to a doubling every two years, a pace since popularized as every 18 months. Dr. Moore said that the next 40 years could be “mind-boggling” and that he wished he could be around to see it. “[T]hat prediction has today become shorthand for rapid technological change.”
Gilder’s Law: An assertion by George Gilder, visionary author of Telecosm, states that “bandwidth grows at least three times faster than computer power.” This means that if computer power doubles every eighteen months (per Moore’s Law), then communications power doubles every six months.
Metcalfe’s Law: Robert Metcalfe’s law states that the “value” or “power” of a network increases in proportion to the square of the number of nodes on the network.
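Taken together, the three laws imply concrete growth rates, which a short sketch can make tangible. The starting figures below are illustrative assumptions chosen only to show the scaling, not measured data:

```python
# Illustrative sketch of the three "laws". All inputs are hypothetical,
# chosen only to demonstrate the growth rates described in the text.

def doublings(years, doubling_period_years):
    """Growth factor after `years` for a quantity that doubles every period."""
    return 2 ** (years / doubling_period_years)

# Moore's Law: transistor counts double roughly every two years.
transistor_growth_10yr = doublings(10, 2.0)   # 2**5 = 32x in a decade

# Gilder's Law: communications power doubling every six months
# (three times faster than an 18-month computing cycle).
bandwidth_growth_10yr = doublings(10, 0.5)    # 2**20, about a million-fold

# Metcalfe's Law: network value scales with the square of the node count.
def metcalfe_value(nodes):
    return nodes ** 2

print(transistor_growth_10yr)                       # 32.0
print(metcalfe_value(1000) / metcalfe_value(100))   # 100.0 -- 10x the nodes, 100x the value
```

The last line illustrates why network growth compounds so sharply: a tenfold increase in connected nodes yields a hundredfold increase in the network’s “value” under Metcalfe’s formulation.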
There are of course other laws and axioms related to the computer and software industries. There is Arthur C. Clarke’s Third Law: “[a]ny sufficiently advanced technology is indistinguishable from magic.” In the context of this paper it may also be worth keeping in mind Sturgeon’s Revelation: “[n]inety percent of everything is crud.”
The Changes to Society as a Result of Technology Since the Formulation of Moore’s Law
We need to examine how the trends these three laws represent have affected our lives and society. The rapid progression of computer-related technology may be best appreciated by considering some of the businesses and related terms that are now ubiquitous in the lexicon but did not exist approximately a decade ago. Google Inc. opened its doors in Menlo Park, California in September 1998; now people commonly “google” someone or something as a way of acquiring background or other information. Googling is made possible by increased computer processing power and expanding bandwidth, and Google® Maps and Google® Earth also resulted from this growth process. Consider also the evolution of eBay®; according to its web site, “eBay® users worldwide trade more than $1,900 worth of goods on the site every second.” Wikipedia® is an online encyclopedia that started in 2001 and, as we all know, now often appears at the top of any search engine’s results, including Google®’s. It also often includes information of questionable accuracy; teachers often advise their students not to rely upon it. A “Wiki is a piece of server software that allows users to freely create and edit Web page content using any Web browser. Wiki supports hyperlinks and has a simple text syntax for creating new pages and crosslinks between internal pages on the fly.” WikiWikiWeb was the first wiki software ever written. It was developed in 1994 by Ward Cunningham in order to make the exchange of ideas between programmers easier. Mr. Cunningham lives in Oregon and is the chief technology officer for AboutUs.
Other technologies that resulted from the convergence of the expansions represented by Moore’s Law, Gilder’s Law, and Metcalfe’s Law include things now considered essential parts of our culture, such as cell phones, GPS units, iPod®s and other MP3 players. Those innovations and many others occurred as a result of more powerful processors, expanding bandwidth and a worldwide network of computers, which according to the Internet Systems Consortium included 541,677,360 host computers in January 2008; all of which fueled a dramatic societal paradigm shift.
Some contend that those changes are not the result of the “Laws” but of other factors. In his book, America Calling: A Social History of the Telephone to 1940, Claude S. Fischer stated, “[w]hen we talk about technology, we speculate about what the implications of technology are. It’s important not to fall into the trap of seeing technology as drivers of social change….” Fischer further opined, “I wanted to show that we adapt technology to our own lives. People are much more resilient than they’re given credit for.”
Regardless of whether the expanding technology drives the changes or the desire for the products that can be realized from advanced technologies drives the technological expansion, the result is the same. It doesn’t matter which is the chicken and which is the egg – technologies continue to expand at an exponential rate.
The Future of Computing and the Internet
Although no one can predict future technological changes with any certainty, it is possible to identify with reasonable confidence some areas where those changes will occur. Changes to hardware, including the physical structure and composition of computers (as we know them) and of the devices that store our seemingly limitless quantities of information, are predictable and expected, as are changes to the means by which we communicate with those computers and one another. The hardware changes in the near term will most likely include the evolution and utilization of one or more of the following: Holographic Storage Technologies, Solid State Memory Storage Technologies, Molecular Computing, Quantum Computers, and Photonic Computers.
Holographic Storage offers high storage densities and fast transfer rates, combined with durable, reliable, low-cost media. According to the web site of InPhase Technologies, one of the current leaders in this field, “[h]olography breaks through the density limits of conventional storage by going beyond recording only on the surface, to recording through the full depth of the medium.” Holography records and reads over a million bits of data with a single flash of light, whereas present optical storage devices record one data bit at a time; the result is transfer rates significantly higher than those of current optical storage devices. “High storage densities and fast transfer rates, combined with durable, reliable, low cost media, make holography poised to become a compelling choice for next-generation storage and content distribution needs.”
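The transfer-rate advantage is simply the parallelism of page-based readout. A back-of-the-envelope sketch makes this concrete; the flash rate below is a hypothetical figure chosen only to illustrate the ratio, while the page size comes from the “over a million bits” figure above:

```python
# Rough comparison of serial (one bit per operation) vs. page-based
# (holographic) readout. The operation rate is an assumed, illustrative value.
bits_per_flash = 1_000_000        # "over a million bits" read per flash of light
operations_per_second = 1_000     # assumed rate, same for both devices

serial_bits_per_second = operations_per_second * 1
holographic_bits_per_second = operations_per_second * bits_per_flash

# At equal operation rates, the speedup equals the page size.
print(holographic_bits_per_second // serial_bits_per_second)  # 1000000
```

Whatever the actual operation rates turn out to be, the point stands: reading a whole page of data per flash multiplies throughput by the page size.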
Solid State Memory Storage Technologies, commonly solid state drives (SSDs), are data storage devices that use nonvolatile memory (Flash) or volatile memory (SDRAM) to store data. While technically not “disks,” these devices are referred to this way because they are typically used as replacements for hard disk drives; they have no moving parts. We are all familiar with the related technologies we presently know as thumb drives, memory sticks, CompactFlash and SD cards. One of the leading developers of SSDs is Micron Technology, which presently manufactures Crucial 2.5-inch solid state drives touted as having superior reliability, increased power efficiency, instant-load performance, quiet acoustics, reduced heat dissipation and light weight.
Molecular Computing is computing that utilizes nanotechnology and DNA. “Molecular computation is a part of the larger body, ‘computational life science’. The goal of computational life science is to understand life systems from the perspective of theory of computation and to apply its research result to bioengineering.” The core advantage of molecular computing is the potential to pack vastly more circuitry onto a “microchip” than silicon will ever be capable of – and to do it cheaply.
Quantum Computers are computational devices based on quantum mechanics. The concept was first explored in the 1970s and early 1980s by physicists and computer scientists such as Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and the late Richard P. Feynman of the California Institute of Technology (Caltech).
The idea emerged when scientists were pondering the fundamental limits of computation. They understood that if technology continued to abide by Moore’s Law, then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. Here a problem arose because at the atomic scale the physical laws that govern the behavior and properties of the circuit are inherently quantum mechanical in nature, not classical. This then raised the question of whether a new kind of computer could be devised based on the principles of quantum physics.
An Optical Computer (also called a photonic computer) is a device that uses the photon in visible light or infrared beams, rather than electric current, to perform digital computations. An electric current flows at only about 10 percent of the speed of light. This limits the rate at which data can be exchanged over long distances, and is one of the factors that led to the evolution of optical fiber. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer might someday be developed that can perform operations 10 or more times faster than a conventional electronic computer.
Additional changes to hardware will include changes to inputs and outputs. In the near term these may include smart pens, better active matrix liquid crystal displays (AMLCDs), augmented reality, and improved user interfaces. In the long term they may include better vocal computer commands, retinal commands and perhaps someday neural links to our “computers”.
There are already several smart pens on the market, including the Pulse Smart Pen from Livescribe and the FLY Fusion™ Pentop Computer. These devices usually utilize digital paper, also known as interactive paper, which is a patterned paper used in conjunction with a digital pen to create handwritten digital documents. The printed dot pattern uniquely identifies the position coordinates on the paper; the digital pen uses this pattern to store the handwriting and upload it to a computer.
Roentgen Display Technology is an active matrix liquid crystal display (AMLCD) optimized to produce razor-sharp images. IBM Research has developed a display screen so precise that it is as clear as the original paper document; the new display is the culmination of a research project code-named Roentgen (http://www.research.ibm.com/roentgen/).
Augmented Reality (AR) is a growing area in virtual reality research and applications. The worlds used in virtual environments such as Second Life® demonstrate that computers are beginning to emulate real-world environments. Some of these worlds are very simplistic, such as the environments created for immersive entertainment and games like Guitar Hero®; others, such as flight simulators and Second Life®, are more expensive and realistic. An augmented reality system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information.
Several new user interfaces are emerging that are now being utilized and will continue to expand in the near future. Many of these are touch-based systems, including Apple®’s multi-touch system used on the iPhone™ and iPhone™-like devices, which are becoming increasingly commonplace. Microsoft® Surface™ computing also utilizes a multi-touch system that provides a whole new way to interact with information, one that engages the senses. Microsoft® asserts that it is at the forefront of developing surface computing products that push computing boundaries, deliver new experiences that break down barriers between users and technology, and provide new opportunities for companies to engage with people.
The need to continue to make chips and other technological devices smaller will likely be met by nanotechnology: “[N]anotechnology, the art of manipulating materials on an atomic or molecular scale to build microscopic devices such as robots, which in turn will assemble individual atoms and molecules into products much as if they were Lego blocks. Nanotechnology is about building things one atom at a time, about making extraordinary devices with ordinary matter.”
The Effects of Technology on Society in the Future
Specific instances of what the distant future will bring are unpredictable, but the emerging future trends are predictable and testable. They will include an expanding use of robotics, genetics, bioengineering and nanotechnology.
The Internet will also continue its evolution. Eventually we will all be able to be seamlessly connected all of the time and be able to communicate with anyone almost anywhere, except maybe parts of Tibet or caves in Afghanistan. We will continue to experience the explosion of cameras throughout the developed world with live links to the Internet.
These emerging trends will include iPhone™-like devices becoming ubiquitous, along with other portable communications devices that can read the local environment at every scale from macro to micro: a country to find a region, a region to find a town, a town to find a neighborhood, a neighborhood to find a restaurant; and while you are in the restaurant eating lunch, you can explore other locations, perhaps your next destination.
Phones are now able to track contacts globally and, conversely, you are able to be tracked. Combine modern intelligent “phones” with artificial intelligence and predictive modeling, and reasonable assumptions will be able to be drawn about the current locations and likely destinations of anyone you are tracking. As we all know, video cameras are already present in many places long considered public; this trend will be extended, and video cams will be accessible to visually track contacts, switching feeds as the contact moves from one zone to the next.
Our shopping experience will also change. In-store purchases using existing and new scan technologies, perhaps even RFID chips in our charge cards or passports, will allow the store to assist customers as they browse and to provide advice about items compatible with recent purchasing behavior, or perhaps to help them locate a gift for a friend or a spouse.
Automobiles, powered by something other than gasoline, will increasingly have TiVo®-like capabilities. Now considered a luxury item, onboard video technology that monitors the vehicle’s surroundings through 360° and records the most recent minute of activity will, within a few years, ship with all vehicles. In the event of an accident or incident, that section of video will be locked in memory, either on impact or manually, and the saved material will encompass the prior minute plus everything until the device is stopped manually, memory runs out, or a preset time period elapses. Incident triggers will notify the relevant authorities, as OnStar® does currently. It is conceivable that insurance companies and/or state or federal regulations will someday require this of customers.
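The recording behavior described above amounts to a circular buffer: the device continuously overwrites its oldest frames and freezes the buffer when a trigger fires. A minimal sketch of that mechanism follows; the class and parameter names are hypothetical, not any vendor’s API:

```python
from collections import deque

class IncidentRecorder:
    """Keeps only the most recent `seconds` of frames; locks a copy on a trigger."""

    def __init__(self, seconds=60, fps=30):
        # deque with maxlen silently discards the oldest frame on overflow,
        # so the buffer always holds at most the last `seconds` of video.
        self.buffer = deque(maxlen=seconds * fps)
        self.locked = None

    def record(self, frame):
        self.buffer.append(frame)

    def trigger(self):
        """On impact (or a manual press), snapshot the retained footage."""
        self.locked = list(self.buffer)

# Usage: strings stand in for camera frames; a tiny buffer keeps the demo short.
rec = IncidentRecorder(seconds=1, fps=3)   # retain only the last 3 frames
for i in range(10):
    rec.record(f"frame-{i}")
rec.trigger()
print(rec.locked)  # ['frame-7', 'frame-8', 'frame-9']
```

The design choice mirrors the essay’s description: nothing is written permanently until an incident occurs, which keeps storage needs constant no matter how long the vehicle is driven.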
Home and business installations of new technologies could open up a whole can of privacy worms. Is it fair to check on the baby sitter to make sure junior is OK? What if baby sitter and his/her special friend are having sex on your couch … is it legal to watch and/or record? Will there be a reasonable expectation of privacy when such capabilities are ubiquitous?
Social informatics is a body of research that examines the social aspects of computerization; perhaps some of the questions posed by this article will be answered by that discipline. You can already enter a doctoral program to study the societal implications of technology at the University of California, Santa Barbara.
Someday will we all cast our votes for President from a portable personal handheld electronic device, encoded with our personal information in a manner which allows only the real owner of the device to use it?
One thing is certain: there will be unexpected, counterintuitive outcomes of new technologies. Moore’s Law, Gilder’s Law, and Metcalfe’s Law, as they fade into oblivion, will give rise to other laws for us to use as measures of exponential technological change. The evolution of Homo sapiens would not have been possible without technology. Humans are defined by technology, and technology in some ways has defined humans. “[A] serious assessment of the history of technology shows that technological change is exponential.” Hence we will see things in our time that will surprise even the most jaded among us.
The challenge is to create the physical and societal infrastructure, including needed revisions to the legal and ethical environments, so future generations can benefit from these advanced technologies without suffering traumatic societal upheaval and the loss of our precious liberties. We appear to be embarking upon that process now, and while there will always be surprises from the technologists, resistance from ideological fundamentalists, and interference from bureaucracies and other interest groups, we will continue on this path of exponential growth as the future unfolds.
Copyright 2008 – Robert Swider and Dennis Kambury. All Rights reserved.
- Plurality should not be posited without necessity. Also often stated as “The explanation requiring the fewest assumptions is most likely to be correct.”
- Joy, Bill. Why the future doesn’t need us. Wired Magazine April 2000. (www.wired.com/wired/archive/8.04/joy_pr.html)
- These are not laws as lawyers have come to know and appreciate them but they do in many ways govern human behavior.
- Gordon E. Moore, Director, Research and Development Laboratories, Fairchild Semiconductor. Published in Electronics, Volume 38, Number 8, April 19, 1965.
- Gilder, George. TELECOSM: How Infinite Bandwidth will Revolutionize Our World, New York: The Free Press 2000.
- deVilla, Joey. Laws of Software Development. http://globalnerdy.com/2007/07/18/laws-of-software-development/.
- Private conversation on June 8, 2008 with Cassidy Swider now a college student at Seattle University.
- Fischer, Claude S., America Calling: A Social History of the Telephone to 1940 (Berkeley: University of California Press 1992). America Calling won the 1995 Dexter Prize of the Society for the History of Technology for deepening our understanding of one of our century’s key technologies, while at the same time providing fresh insights into the process of technological change in general.
- http://www.inphase-tech.com/technology/default.asp?subn=2_1
- Hagiya, Masami. Theory and Construction of Molecular Computers http://hagi.is.s.u-tokyo.ac.jp/MCP/
- West, Jacob. The Quantum Computer: An Introduction. April 28, 2000. http://www.cs.caltech.edu/~westside/quantum-intro.html
- Microsoft Surface http://www.surface.com
- Kurzweil, Ray. The Law of Accelerating Returns. http://www.kurzweilai.net/articles/art0134.html (An excellent article on the nature of exponential growth and the evolution of technology. In this article Kurzweil predicts the Singularity a “technological change so rapid and so profound that it represents a rupture in the fabric of human history. … (which) will transform all aspects of our lives, social, sexual, and economic ….”)