memory cycle time
Recently Published Documents

TOTAL DOCUMENTS: 4 (FIVE YEARS: 0)
H-INDEX: 2 (FIVE YEARS: 0)

2011 ◽  
pp. 2605-2622
Author(s):  
Jill Owen

Knowledge reuse has long been an issue for organisations. The management, reuse and transfer of knowledge can improve project management capabilities (i.e., learning, memory, cycle time), resulting in continuous learning. Although knowledge management has been recognised as a critical success factor in programme management, very little research has been conducted to date (Lycett, Rassau, & Danson, 2004; Soderlund, 2004). A framework is discussed that demonstrates how knowledge is created, transferred, captured and reused within project and programme management, resulting in improved project management maturity. The framework utilises a task-based approach to knowledge management and assumes that knowledge is created, transferred and reused as a result of an individual performing a specific task, which in this context is a project at the project level and a programme at the programme level.



1991 ◽  
Vol 02 (03) ◽  
pp. 805-816 ◽  
Author(s):  
V.B. ANDREICHENKO ◽  
VL.S. DOTSENKO ◽  
L.N. SHCHUR ◽  
A.L. TALAPOV

We have designed and built a special-purpose processor with a very good performance-to-price ratio, which suggests a new approach to parallel computing. A simple one-spin-flip Monte Carlo algorithm is realized in hardware, so the processor is suitable for studies of dynamic as well as thermodynamic properties of the two-dimensional Ising model with different types of inhomogeneities. The speed of the processor is determined entirely by the speed of the memories used in it: to perform an elementary Monte Carlo step, the processor needs a time only several percent longer than one memory cycle time. It therefore realizes the fastest possible one-spin-flip Monte Carlo processor architecture.
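The abstract does not describe the hardware in detail, but the algorithm it implements in silicon is standard. As a point of reference, here is a minimal software sketch of a one-spin-flip update for the two-dimensional Ising model, assuming a Metropolis acceptance rule (the abstract says only "a simple one spin flip Monte Carlo algorithm"); the lattice size, temperature, and sweep count below are illustrative.

```python
import math
import random

random.seed(1)

def metropolis_sweep(spins, L, beta):
    """One sweep (L*L attempted flips) of single-spin-flip Metropolis
    updates on an L x L Ising lattice with periodic boundaries.
    spins is a list of lists holding +1/-1; beta is 1/kT with J = 1."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn  # energy change if spin (i, j) flips
        # Accept downhill moves always, uphill moves with prob exp(-beta*dE).
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

L = 16
spins = [[1] * L for _ in range(L)]  # cold start: all spins up
for _ in range(100):
    metropolis_sweep(spins, L, beta=1.0)  # ordered phase: beta > beta_c ~ 0.4407
magnetization = abs(sum(sum(row) for row in spins)) / (L * L)
```

In the hardware version described by the abstract, the inner body of this loop (neighbour fetch, energy comparison, conditional flip) is what completes in roughly one memory cycle time per step.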


1981 ◽  
Vol 21 (04) ◽  
pp. 447-453 ◽  
Author(s):  
James S. Nolen ◽  
D.W. Kuba ◽  
M.J. Kascic

Abstract As computer technology approaches limitations imposed by the speed of light, increased emphasis is placed on vector processors. These have the ability to greatly increase the speed of arithmetic even without improvements in such basic computer characteristics as memory cycle time. This paper deals with solving systems of finite difference equations on the STAR 100 and the CYBER 203, two Control Data Corp. computers with built-in vector processors. Systems of three-dimensional finite difference equations having from 2,000 to 8,000 unknowns were solved by means of Gaussian elimination and line successive overrelaxation (LSOR). On these machines, the D4 Gaussian elimination technique reduced computer time by factors as large as 4.6 relative to standard Gaussian elimination. Vectorization of the D4 code on the STAR 100 reduced computer times relative to scalar results by factors as large as 26, despite nonoptimal coding. LSOR was vectorized successfully, with computer time reduction factors of 35 to 43 on the STAR 100. On the CYBER 203, run times were reduced by factors of 45 to 54 relative to the scalar performance of the STAR 100. On an 8,000-block problem, average processing speed for a complete LSOR solution was approximately 25 million floating operations per second (megaflops).

Introduction Large computers with hardware specifically designed for vector processing offer the potential for solving large systems of finite difference equations with exceptional speed. Our work was intended to test certain solution algorithms and determine which perform best on two such computers: the STAR 100 and the CYBER 203. The algorithms discussed are both well known: (1) Gaussian elimination and (2) successive overrelaxation (SOR). The STAR 100 has as much as 1,024,000 words of 64-bit core memory and has a virtual operating system.
Its most unusual feature, however, is that processing speed can vary over two orders of magnitude, depending on the structure of the computer code being processed. The speed of 64-bit arithmetic ranges from about 0.5 to 50 megaflops. (A floating operation is an add, multiply, divide, etc.) At the low end of the speed range, its performance is similar to a CDC 6600, a 1960's technology computer, but at the high end it can outrun the fastest of modern scalar computers. This large speed variation results from the fact that the STAR 100's core memory has a destructive read characteristic that prevents the same core area from being referenced for 31 machine cycles following a previous read. (This results in a memory cycle time of 1,280 nanoseconds.) Coupled with this slow core memory is a vector arithmetic unit that can produce two 64-bit adds or one 64-bit multiply during every 40-nanosecond clock cycle, once the arithmetic unit reaches steady state (see Appendix for details). All vector operations (adds, multiplies, etc.) have a linear performance characteristic of the form C = S + R·L, (1) where C is the number of clock cycles required to complete the operation, S is vector start-up time, R is the steady-state result rate, and L is vector length.
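The consequence of Eq. 1 is that short vectors are dominated by the start-up cost S while long vectors approach the steady-state rate, which is why the paper's speedups depend so strongly on vector length. A small numerical sketch, using the 40-nanosecond clock from the text but illustrative (not actual STAR 100) values for S and R:

```python
# Timing model from Eq. 1: C = S + R*L clock cycles per vector operation.
# CLOCK_NS comes from the text (40-ns clock); the S and R values passed
# below are illustrative assumptions, not measured STAR 100 parameters.

CLOCK_NS = 40.0  # nanoseconds per clock cycle

def effective_mflops(L, S, R):
    """Sustained megaflops for one vector operation of length L,
    assuming one floating result per vector element."""
    cycles = S + R * L          # Eq. 1
    seconds = cycles * CLOCK_NS * 1e-9
    return L / seconds / 1e6

# Short vectors pay the start-up cost; long vectors amortise it.
short_rate = effective_mflops(L=10, S=100, R=1.0)
long_rate = effective_mflops(L=10_000, S=100, R=1.0)
peak_rate = 1.0 / (CLOCK_NS * 1e-9) / 1e6  # one result per cycle: 25 Mflops
```

With these assumed values, a length-10 vector runs at a few megaflops while a length-10,000 vector approaches the 25-megaflop one-result-per-cycle ceiling, mirroring the two-orders-of-magnitude spread described above.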

