Compute Node Models: Large-scale Amenable Block-Level Simulation for Memory Hierarchies and Pipelines

2017
Author(s):  
Nandakishore Santhi ◽  
Gopinath Chennupati

2013
Vol 221 (3)
pp. 190-200
Author(s):  
Jörg-Tobias Kuhn ◽  
Thomas Kiefer

Several techniques have been developed in recent years to generate optimal large-scale assessments (LSAs) of student achievement. These techniques often blend procedures from such diverse fields as experimental design, combinatorial optimization, particle physics, and neural networks. However, despite the theoretical advances in the field, well-documented test designs in which all factors guiding design decisions are explicitly and clearly communicated remain surprisingly scarce. This paper therefore has two goals. First, it gives a brief summary of relevant key terms, experimental designs, and automated test assembly routines in LSA. Second, it describes in detail the conceptual and methodological steps in designing the assessment of the Austrian educational standards in mathematics. The test design was generated using a two-step procedure, starting at the item-block level and continuing at the item level. First, a partially balanced incomplete block design was generated using simulated annealing; in a second step, items were assigned to the item blocks using mixed-integer linear optimization combined with a shadow-test approach.
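The two-step assembly described above can be illustrated with a toy version of its first step. The sketch below is a minimal simulated-annealing search that assigns item blocks to booklets while balancing pairwise block co-occurrence; all function and parameter names are illustrative, not taken from the paper, and a real design would additionally enforce positional and content constraints.

```python
import math
import random

def anneal_block_design(n_blocks, n_booklets, blocks_per_booklet,
                        steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Toy simulated-annealing search: assign item blocks to booklets so
    that every pair of blocks co-occurs in roughly the same number of
    booklets (a proxy for a partially balanced incomplete block design)."""
    rng = random.Random(seed)
    booklets = [rng.sample(range(n_blocks), blocks_per_booklet)
                for _ in range(n_booklets)]

    def cost(bks):
        # Variance of pairwise co-occurrence counts; 0 means perfectly balanced.
        pair = {}
        for b in bks:
            for i in range(len(b)):
                for j in range(i + 1, len(b)):
                    key = (min(b[i], b[j]), max(b[i], b[j]))
                    pair[key] = pair.get(key, 0) + 1
        counts = [pair.get((i, j), 0)
                  for i in range(n_blocks) for j in range(i + 1, n_blocks)]
        mean = sum(counts) / len(counts)
        return sum((c - mean) ** 2 for c in counts)

    cur, t = cost(booklets), t0
    for _ in range(steps):
        k = rng.randrange(n_booklets)            # pick a booklet
        pos = rng.randrange(blocks_per_booklet)  # and a slot within it
        old = booklets[k][pos]
        # swap in a block not already present in this booklet
        booklets[k][pos] = rng.choice(
            [b for b in range(n_blocks) if b not in booklets[k]])
        new_cost = cost(booklets)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if new_cost <= cur or rng.random() < math.exp(-(new_cost - cur) / max(t, 1e-9)):
            cur = new_cost
        else:
            booklets[k][pos] = old               # undo the move
        t *= cooling
    return booklets, cur
```

For small configurations such as 7 blocks in 7 booklets of 3, the imbalance usually drops substantially; the paper's second step would then fill each block with items via mixed-integer linear optimization.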


1993
Vol 17 (1-2)
pp. 107-114
Author(s):  
J.S. Vitter ◽  
M.H. Nodine

2021
Vol 13 (18)
pp. 3603
Author(s):  
Joaquín Salas ◽  
Pablo Vera ◽  
Marivel Zea-Ortiz ◽  
Elio-Atenogenes Villaseñor ◽  
Dagoberto Pulido ◽  
...  

One of the challenges in the fight against poverty is the precise localization and assessment of vulnerable communities’ sprawl. The characterization of vulnerability is traditionally accomplished through nationwide census exercises, a burdensome process that requires field visits by trained personnel. Unfortunately, most countrywide census exercises are conducted only sporadically, making it difficult to track the short-term effects of policies intended to reduce poverty. This paper introduces a definition of vulnerability following UN-Habitat criteria, assesses different CNN machine-learning architectures, and establishes a mapping between satellite images and survey data. Starting from the 2,178,508 residential blocks recorded in the 2010 Mexican census and multispectral Landsat-7 images, multiple CNN architectures are explored. The best performance is obtained with EfficientNet-B3, achieving areas under the ROC and Precision-Recall curves of 0.9421 and 0.9457, respectively. This article shows that publicly available information, in the form of census data and satellite images, along with standard CNN architectures, may be employed as a stepping stone for the countrywide characterization of vulnerability at the residential block level.
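The quality metric quoted above, area under the ROC curve, has a simple rank-based definition: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, counting ties as one half. A minimal sketch (illustrative only, quadratic in the number of examples, not the paper's evaluation code):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # tie
    return wins / (len(pos) * len(neg))
```

For example, labels [0, 0, 1, 1] with scores [0.1, 0.4, 0.35, 0.8] give an AUC of 0.75.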


2020
Vol 20 (3)
pp. 223-230
Author(s):  
Jan Mühlig ◽  
Michael Müller ◽  
Olaf Spinczyk ◽  
Jens Teubner

Abstract Emerging hardware platforms are characterized by large degrees of parallelism, complex memory hierarchies, and increasing hardware heterogeneity. Their theoretical peak data-processing performance can only be unleashed if the different pieces of systems software collaborate much more closely and if their traditional dependencies and interfaces are redesigned. We have developed the key concepts and a prototype implementation of a novel system software stack named MxKernel. For MxKernel, efficient large-scale data processing capabilities are a primary design goal. To achieve this, heterogeneity and parallelism become first-class citizens and deep memory hierarchies are considered from the very beginning. Instead of a classical “thread” model, MxKernel provides a simpler control-flow abstraction: MxTasks model closed units of work for which MxKernel will guarantee the required execution semantics, such as exclusive access to a specific object in memory. They are also an elegant abstraction for heterogeneity and resource sharing. Furthermore, MxTasks are annotated with metadata, such as code variants (to support heterogeneity), memory access behavior (to improve cache efficiency and support memory hierarchies), or dependencies between MxTasks (to improve scheduling and avoid synchronization cost). With precisely the required metadata available, MxKernel can provide a lightweight yet highly efficient form of resource management, even across applications, operating systems, and databases. Based on the MxKernel prototype, we present preliminary results from this ambitious undertaking. We argue that threads are an ill-suited control-flow abstraction for modern computer architectures and that a task-based execution model is to be favored.
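The idea of tasks as closed units of work annotated with dependency metadata can be sketched in miniature. The code below is a toy analogue, not the MxKernel API: tasks declare their prerequisites, and a scheduler runs them in dependency order using Kahn's topological sort, so no task-side locking is needed.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    """Toy analogue of an MxTask: a closed unit of work annotated with
    metadata the runtime can exploit (here, only its dependencies)."""
    name: str
    work: callable
    deps: list = field(default_factory=list)  # names of prerequisite tasks

def run_tasks(tasks):
    """Execute tasks in dependency order (Kahn's topological sort).
    Returns the order in which tasks were run."""
    by_name = {t.name: t for t in tasks}
    indeg = {t.name: len(t.deps) for t in tasks}
    dependents = {t.name: [] for t in tasks}
    for t in tasks:
        for d in t.deps:
            dependents[d].append(t.name)
    ready = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while ready:
        name = ready.popleft()
        by_name[name].work()              # run the unit of work
        order.append(name)
        for child in dependents[name]:    # release tasks that waited on it
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle detected")
    return order
```

Because the dependency metadata is known up front, the scheduler can order conflicting tasks instead of synchronizing them at run time, which is the spirit of the execution-semantics guarantee described in the abstract.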


Author(s):  
Cunguang Zhang ◽  
Hongxun Jiang ◽  
Riwei Pan ◽  
Haiheng Cao ◽  
Mingliang Zhou

Sea-land segmentation based on edge detection is commonly utilized in ship detection, coastline extraction, and satellite system applications due to its high accuracy and speed. Pixel-level distribution statistics do not currently satisfy the requirements of high-resolution, large-scale remote sensing image processing. To address this problem, we propose a high-throughput hardware architecture for sea-land segmentation that exploits multi-dimensional parallelism and is well suited to wide remote sensing images. Efficient multi-dimensional block-level statistics allow for relatively infrequent pixel-level memory access; a boundary block tracking process replaces whole-image scanning, markedly enhancing efficiency. Tracking efficiency is further improved by a two-step scanning strategy that promptly feeds back the path state for the large number of same-direction blocks arising in the algorithm. The proposed architecture was deployed on a Xilinx Virtex K7-410T; its practical processing time for a [Formula: see text] remote sensing image is only about 0.4[Formula: see text]s. The peak performance is 1.625[Formula: see text]Gbps, which is higher than other FPGA implementations of segmentation algorithms. The proposed structure is highly competitive for processing wide remote sensing images.
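The block-level statistics idea, classifying coarse blocks first and reserving pixel-level work for boundary blocks, can be sketched as follows. This is an illustrative simplification (mean-intensity thresholding in pure Python), not the proposed hardware architecture or its edge-detection pipeline.

```python
def block_segment(img, bs, thresh):
    """Toy block-level sea-land split: classify each bs x bs block by its
    mean intensity, then flag boundary blocks (blocks with a differently
    labelled neighbour) as the only candidates for pixel-level refinement."""
    h, w = len(img), len(img[0])
    nby, nbx = h // bs, w // bs
    label = [[0] * nbx for _ in range(nby)]   # 1 = land, 0 = sea
    for by in range(nby):
        for bx in range(nbx):
            total = sum(img[by * bs + y][bx * bs + x]
                        for y in range(bs) for x in range(bs))
            label[by][bx] = 1 if total / (bs * bs) > thresh else 0
    boundary = set()
    for by in range(nby):
        for bx in range(nbx):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = by + dy, bx + dx
                if 0 <= ny < nby and 0 <= nx < nbx and \
                        label[ny][nx] != label[by][bx]:
                    boundary.add((by, bx))    # coastline candidate block
                    break
    return label, boundary
```

Only the blocks in `boundary` would need full-resolution processing, which is the mechanism by which block-level statistics reduce pixel-level memory access.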


Author(s):  
E. Bailey ◽  
R. Taylor ◽  
K. R. Croasdale

The mechanics of ice rubble plays an important role in many engineering applications, including ice-structure interactions with oil and gas infrastructure, river and lake engineering, and ship-ice interactions in northern shipping lanes. Of particular interest are the massive accumulations of rubble formed by shear or compression in the ice cover, which consolidate into sea ice ridges that can be hazardous to such structures. These are common ice features in Arctic and sub-Arctic environments and as a result often govern the design loads for ships and for coastal and offshore structures operating there. In addition, ridge keels can scour the seafloor in relatively shallow waters, posing a threat to pipelines and other subsea facilities. It is not clear what load an ice rubble feature can exert on a given structure or how it will deform; this depends on a number of parameters, including the age of the feature, its composition and structure, and its strength and failure behaviour. This paper examines the mechanical properties of ice rubble over multiple scales. It begins at the single-block level, describing how ice block properties vary over time, before looking at the bonding/sintering processes that occur between two ice blocks and eventually the processes that take place among multiple ice blocks (i.e., ice rubble) and large-scale sea ice ridges. Particular attention is paid to the effects of temperature and pressure on ice rubble, as these parameters are believed to be important to our understanding of its behaviour.


1999
Vol 173
pp. 243-248
Author(s):  
D. Kubáček ◽  
A. Galád ◽  
A. Pravda

Abstract Unusual short-period comet 29P/Schwassmann-Wachmann 1 has inspired many observers to attempt to explain its unpredictable outbursts. In this paper, large-scale structures and features of the inner coma in periods around outbursts are studied. CCD images were taken at Whipple Observatory, Mt. Hopkins, in 1989 and at the Astronomical Observatory, Modra, from 1995 to 1998. Photographic plates of the comet were taken at Harvard College Observatory, Oak Ridge, from 1974 to 1982. The latter were first digitized so that the same image-processing techniques could be applied to optimize the visibility of features in the coma during outbursts. Outbursts and coma structures show various shapes.


1994
Vol 144
pp. 29-33
Author(s):  
P. Ambrož

Abstract The large-scale coronal structures observed during sporadically visible solar eclipses were compared with numerically extrapolated field-line structures of the coronal magnetic field. A characteristic relationship between the observed structures of coronal plasma and the magnetic field line configurations was determined. The long-term evolution of large-scale coronal structures, inferred from photospheric magnetic observations over the course of the 11- and 22-year solar cycles, is described. Some known parameters, such as the source surface radius or the coronal rotation rate, are discussed and reinterpreted. A relation between large-scale photospheric magnetic field evolution and coronal structure rearrangement is demonstrated.

