Thomas Haigh, Mark Priestley, Crispin Rope, ENIAC in Action: Making and Remaking the Modern Computer (History of Computing). Cambridge/London: MIT Press 2016. 341 pp., $38.00. ISBN 978-0-262-03398-5.

2017, Vol. 40 (1), pp. 98-100
Author(s): Sebastian Vehlken

Author(s): David Marsh, Song Shen, Gregory O’Hare, Michael O’Grady

Throughout the history of computing, there has been a trend for the ratio of processing elements to people to increase, resulting in the creation and popularization of new usage paradigms. At the start of the modern computer age, many individual users shared a single mainframe in one central location. In the early 1980s, however, significant developments in microprocessor technologies ushered in the desktop era, resulting in a one-to-one correspondence between individual users and their computers. Computer resources were now intrinsically distributed, and the growth of the internet allowed these resources to connect to each other. The pervasive computing paradigm is the next logical stage in this trend, reversing the original computer-to-human ratio so that multiple computational devices are available to each individual user. In reality, this point was passed a number of years ago. Mobile phones, personal digital assistants (PDAs), portable music players, and the numerous embedded devices that people now take for granted have woven computing technologies into the fabric of everyday life. Thus, for the first time, the goal of computing resources being available on an anywhere, anytime basis is realistic. In addition to computing being available everywhere, pervasive computing has a second key tenet: user interaction with these universal computing elements should occur in as natural and intuitive a manner as possible. Pervasive computing technology should therefore be assimilated transparently into the user’s natural environment. Rather than deal with the entirety of this broad topic, this article provides an overview of the key developments in one particular technology essential to the realization of the pervasive computing vision: the wireless sensor network.


Author(s):  
Subrata Dasgupta

In Chapter 2, I suggested that Babbage’s place in the history of computing was twofold: first, because his Analytical Engine represented, for the first time, the idea of automatic universal computing and how this idea might be implemented, and second, because some of his design ideas—the store, mill, control, and user interface via punched cards—anticipated fundamental principles of the electronic universal computer that would be created some 75 years after his death. There is a modernity to his idea that makes us pause. Indeed, it led Babbage scholar Allan Bromley to admit that he was “bothered” by the architectural similarity of the Analytical Engine to the modern computer, and to wonder whether there is an inevitability to this architecture: Is this the only way a computer could be organized internally? Thus, Babbage’s creativity lay not only in conceiving a machine that had no antecedent, but also in envisioning an idea of universal computing that disappeared, reappeared many decades later, and came to be the dominant architectural principle in computing. This observation is, of course, present-centered; we might be perilously close to what Herbert Butterfield called the “Whig interpretation of history” (see Prologue, section VII), for we seem to be extolling Babbage’s achievement because of its resonance with the achievements of our own time. But were there any direct consequences of his idea? What happened after Babbage? Did he have any influence on those who came after? And, if not, what took place in the development of what we have come to call computer science? In fact, there is a view that between Babbage’s mechanical world of computing and the electronic age nothing really happened—that the time in between represented the Dark Ages in the history of computing. This is as misguided as the view, once held by historians, that Europe between the end of the Roman Empire (circa fifth century) and the Renaissance (the 15th–16th centuries)—the Middle Ages—was in a state of intellectual and creative backwardness.


Colossus, 2006
Author(s): Jack Copeland

Secrecy about Colossus has bedevilled the history of computing. In the years following the Second World War, the Hungarian-born American logician and mathematician John von Neumann, through writings and charismatic public addresses, made the concept of the electronic digital computer widely known. Von Neumann knew nothing of Colossus, and he told the world that the American ENIAC—first operational at the end of 1945, two years after Colossus—was ‘the first electronic computing machine’. Others familiar with the ENIAC and unaware of Colossus peddled the same message. The myth soon became set in stone, and for the rest of the twentieth century book after book—not to mention magazines and newspaper articles—told readers that the ENIAC was the first electronic computer. In 1971, a leading computer science textbook gave this historical summary: ‘The early story has often been told, starting with Babbage and . . . up to the birth of electronic machines with ENIAC.’ The present chapter revisits the early story, setting Colossus in its proper place. In the original sense of the word, a computer was not a machine at all, but a human being—a mathematical assistant whose task was to calculate by rote, in accordance with a systematic method supplied by an overseer prior to the calculation. The computer, like a filing clerk, might have little detailed knowledge of the end to which his or her work was directed. Many thousands of human computers were employed in business, government, and research establishments, doing some of the sorts of calculating work that nowadays is performed by electronic computers (see photograph 42). The term ‘computing machine’ was used increasingly from the 1920s to refer to small calculating machines which mechanised elements of the human computer’s work. For a complex calculation, several dozen human computers might be required, each equipped with a desktop computing machine. By the 1940s, however, the scale of some calculations required by physicists and engineers had become so great that the work could not easily be done in a reasonable time by even a roomful of human computers with desktop computing machines. The need to develop high-speed large-scale computing machinery was pressing.


2006, Vol. 38 (1), pp. 67-71
Author(s): Thomas J. Cortina, Richard McKenna
