Vannevar Bush Faculty Fellowship (DOD) ◽ 2020 ◽ Vol 44 (17) ◽ pp. 5-5 ◽ Keyword(s): Technology
2008 ◽ Vol 11 (2) ◽ pp. 105-117 ◽ Author(s): Dennis K. McBride

1984 ◽ Vol 40 (3) ◽ pp. 19-20 ◽ Author(s): Deborah Shapley

Physics Today ◽ 1998 ◽ Vol 51 (12) ◽ pp. 49-50 ◽ Author(s): G. Pascal Zachary, Jessica Wang

Leonardo ◽ 1999 ◽ Vol 32 (5) ◽ pp. 353-358 ◽ Author(s): Noah Wardrip-Fruin

We look to media as memory, and as a place to memorialize, when we have lost. Hypermedia pioneers such as Ted Nelson and Vannevar Bush envisioned the ultimate media within the ultimate archive—with each element in continual flux, and with constant new addition. Dynamism without loss. Instead we have the Web, where “Not Found” is a daily message. Projects such as the Internet Archive and Afterlife dream of fixing this uncomfortable impermanence. Marketers promise that agents (indentured information servants that may be the humans of About.com or the software of “Ask Jeeves”) will make the Web comfortable through filtering—hiding the impermanence and overwhelming profluence that the Web's dynamism produces. The Impermanence Agent—a programmatic, esthetic, and critical project created by the author, Brion Moss, a.c. chapman, and Duane Whitehurst—operates differently. It begins as a storytelling agent, telling stories of impermanence, stories of preservation, memorial stories. It monitors each user's Web browsing, and starts customizing its storytelling by weaving in images and texts that the user has pulled from the Web. In time, the original stories are lost. New stories, collaboratively created, have taken their place.


Author(s): Teresa Numerico

We can find the first anticipation of the World Wide Web's hypertextual structure in Bush's 1945 paper, in which he described a “selection” and storage machine called the Memex, capable of keeping a user's useful information and connecting it to other relevant material already present in the machine or added by other users. We will argue that Vannevar Bush conceived this type of machine because of his involvement with analog devices. During the 1930s he invented and built the Differential Analyzer, a powerful analog machine used to calculate various relevant mathematical functions. The model of the Memex is not a digital one, because it relies on another form of data representation, one that emulates the procedures of memory more than the logic used by the intellect. Memory seems to select and arrange information according to association strategies, i.e., using analogies and connections that are very often arbitrary, sometimes even chaotic and completely subjective. The organization of information and the knowledge-creation process suggested by logic and the symbolic formal representation of data is deeply different from the former, though the logical approach is at the core of the birth of computer science (i.e., the Turing machine and the von Neumann machine). We will discuss the issues raised by these two “visions” of information management and the influence of the philosophical tradition of the theory of knowledge on the hypertextual organization of content. We will also analyze the consequences of these different attitudes for information retrieval techniques in a hypertextual environment such as the web. Our position is that it is necessary to take into account the nature and the dynamic social topology of the network when choosing information retrieval methods for it; otherwise, we risk creating a misleading service for the end user of web search tools (i.e., search engines).


Author(s): Nicole Gingrich, Michael Hall, Isaac Patterson

In Science—The Endless Frontier, Vannevar Bush wrote that reaping the potential benefits of science conducted at federal laboratories requires that the discoveries made in the laboratories be transferred to society. In federal laboratories, Offices of Research and Technology Applications (ORTAs) are tasked with transferring laboratory-developed technologies to the market, allowing society to reap the benefits provided by scientific investments. In fiscal year 2016, the Technology Partnerships Office of the National Institute of Standards and Technology (NIST) conducted a first-of-its-kind survey of the ORTAs of more than 50 federal laboratories to obtain information on their organization and operation. We present descriptive analyses of the responses to this survey in two topical areas: organizational characteristics and technology transfer characteristics. We disaggregated the data by budget size to describe similarities and differences in responses across the budget categories. Among the relationships we observed, we found that ORTAs with larger technology transfer budgets report higher frequencies of conducting internal technology transfer activities, such as patent prosecution (e.g., drafting patents, filing patent applications, and responding to actions from the patent office) and market analysis. Additionally, we provide context to the data by summarizing the relevant research on ORTAs at universities, and we present potential inferences that may be drawn from that body of research and applied to the data on ORTAs at federal laboratories.


2021 ◽ Author(s): Anindya Ghoshal, Michael J. Walock, Andy Nieto, Muthuvel Murugan, Clara Hofmeister-Mock, ...

Abstract: Ultra-high temperature ceramic (UHTC) materials have attracted attention for hypersonic applications. Currently there is significant interest in possible gas turbine engine applications of UHTC composites as well. However, many of these materials, such as hafnium carbide, zirconium carbide, and zirconium diboride, have significant oxidation resistance and toughness limitations. In addition, these materials are very difficult to manufacture because of their high melting points. In many cases, SiC powder is incorporated into UHTCs to aid in processing and to enhance fracture toughness. This can also improve the materials’ oxidation resistance at moderately high temperatures due to a crack-healing borosilicate phase. In numerous prior studies, ZrB2-SiC composites have shown very good oxidation resistance up to 1700 °C, due to the formation of SiO2 and ZrO2 scales. While reduced thermal conductivity and oxidation resistance at higher temperatures may limit their use in hypersonic applications, these UHTC-SiC composites may find applications in turbomachinery, as either stand-alone parts or as a component in a multi-layer system. The US Army Research Laboratory (ARL), the Naval Postgraduate School (NPS), and the University of California – San Diego (UCSD) are developing tough UHTC composites with high durability and oxidation resistance. For this paper, UHTC-SiC composites and high-entropy fluorite oxides were developed using planetary and high-energy ball milling and consolidated using spark plasma sintering (SPS). These materials were evaluated for their oxidation resistance, ablation resistance, and thermal cycling behavior under a DoD/OSD-funded Laboratory University Collaborative Initiative (LUCI) Fellowship and the DoD Vannevar Bush Fellowship Program. In the present paper, experimental results and post-test material characterization of SPS-sintered ZrB2, ZrB2+SiC, ZrB2+SiC+HfC, HfC+SiC, and HfC+ZrB2 pellets subjected to ablation testing are presented.


Author(s): Stan Ruecker

Everyone who has browsed the Internet is familiar with the problems involved in finding what they want. From the novice to the most sophisticated user, the challenge is the same: how to identify quickly and reliably the precise Web sites or other documents they seek from within an ever-growing collection of several billion possibilities? This is not a new problem. Vannevar Bush, the successful Director of the Office of Scientific Research and Development, which included the Manhattan Project, made a famous public call in The Atlantic Monthly in 1945 for the scientific community in peacetime to continue pursuing the style of fruitful collaboration they had experienced during the war (Bush, 1945). Bush advocated this approach to address the central difficulty posed by the proliferation of information beyond what could be managed by any single expert using contemporary methods of document management and retrieval. Bush’s vision is often cited as one of the early visions of the World Wide Web, with professional navigators trailblazing paths through the literature and leaving sets of linked documents behind them for others to follow. Sixty years later, we have the professional indexers behind Google, providing the rest of us with a magic window into the data. We can type a keyword or two, pause for reflection, then hit the “I’m feeling lucky” button and see what happens. Technically, even though it often runs in a browser, this task is “information retrieval.” One of its fundamental tenets is that the user cannot manage the data and needs to be guided and protected through the maze by a variety of information hierarchies, taxonomies, indexes, and keywords. Information retrieval is a complex research domain. The Association for Computing Machinery, arguably the largest professional organization for academic computing scientists, sponsors a periodic contest in information retrieval, where teams compete to see who has the most effective algorithms.
The contest organizers choose or create a document collection, such as a set of a hundred thousand newspaper articles in English, and contestants demonstrate their software’s ability to find the most documents most accurately. Two of the measures are precision and recall: both are ratios, and they pull in opposite directions. Precision is the ratio of the number of relevant documents correctly identified to the total number of documents returned by the search. Recall is the ratio of the number of relevant documents retrieved to the total number in the collection that should have been retrieved. It is therefore possible to get 100% on precision—just retrieve one document precisely on topic. However, the corresponding recall score would be a disaster. Similarly, an algorithm can score 100% on recall just by retrieving all the documents in the collection. Again, the related precision score would be abysmal. Fortunately, information retrieval is not the only technology available. For collections that contain only thousands of entries, there is no reason why people should not be allowed to simply browse the entire contents, rather than being limited to carrying out searches. Certainly, retrieval can be part of browsing—the two technologies are not mutually exclusive. However, by embedding retrieval within browsing the user gains a significant number of perceptual advantages and new opportunities for actions.
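The precision/recall trade-off described above can be sketched in a few lines of Python. This is a minimal illustration, not any contest's scoring code; the document IDs and collection size are made-up values.

```python
# Precision and recall over sets of document IDs, as defined in the text.

def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are actually relevant."""
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved, relevant):
    """Fraction of the relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return len(retrieved & relevant) / len(relevant)

relevant = {1, 2, 3, 4, 5}        # documents that should be retrieved

# Retrieving one on-topic document: perfect precision, poor recall.
one_hit = {3}
print(precision(one_hit, relevant))   # 1.0
print(recall(one_hit, relevant))      # 0.2

# Retrieving the entire collection: perfect recall, poor precision.
everything = set(range(1, 101))       # a hypothetical 100-document collection
print(precision(everything, relevant))  # 0.05
print(recall(everything, relevant))     # 1.0
```

The two degenerate strategies make the opposing pull of the measures concrete: each scores 100% on one ratio by sacrificing the other.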

