Using Deep Learning to Simulate Multi-Disciplinary Design Teams

2021 ◽  
Author(s):  
Gary M. Stump ◽  
Michael Yukish ◽  
Jonathan Cagan ◽  
Christopher McComb

Abstract: Human subject experiments are often used in research efforts to understand human behavior in design. However, such research is often time-consuming, expensive, and limited in scope due to the need to experimentally control specific variables. This work develops an initial digital simulation of team-based multidisciplinary design, in which the actions of individual team members are simulated using deep learning models trained on historical human design trends. The main benefit of this work is to simulate design session events and interactions without human participants, providing a complementary method for rapidly performing digital team-based experiments. This research merges the benefits of purely data-driven modeling, which makes minimal assumptions about process, with the strengths of agent-based modeling, in which agent behavior can be tailored. Initial results show that the simulated design team sessions replicate the trends and distributions observed in human-based team sessions while running approximately 21 times faster than equivalent human subject studies. The multidisciplinary design problem currently simulated is loosely coupled, in the sense that agent behaviors can be modeled in isolation from other agents and yet replicate the behavior of the ensemble. Future work will extend the agents with sense-and-respond behaviors that can be used to model tightly coupled problems and to truly evaluate team formulations.
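As a rough illustration of the loosely coupled simulation the abstract describes, the sketch below steps several independent agents through a shared design state. It is a minimal toy, not the authors' implementation: the agent class, its placeholder "policy" weights, and the objective are all assumptions standing in for deep learning models trained on logged human actions.

```python
# Minimal sketch of a loosely coupled design-team simulation. Each agent's
# policy is a placeholder for a deep model trained on historical human
# design trends; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class DesignAgent:
    """One discipline-specific team member acting in isolation of others."""
    def __init__(self, n_vars):
        # Placeholder "trained" weights; in the paper these would come from
        # a deep learning model fit to human design data.
        self.w = rng.normal(size=n_vars)

    def propose_action(self, design):
        # Pick the design variable this agent's policy scores highest and
        # propose a small perturbation to it.
        idx = int(np.argmax(np.abs(self.w * design)))
        return idx, -0.1 * np.sign(self.w[idx])

def objective(design):
    # Toy stand-in for the multidisciplinary design objective.
    return float(np.sum(design ** 2))

design = rng.normal(size=8)                 # shared design state
agents = [DesignAgent(8) for _ in range(4)] # four simulated team members

for step in range(50):                      # one simulated design session
    for agent in agents:                    # loosely coupled: each agent acts
        idx, delta = agent.propose_action(design)  # without sensing the others
        design[idx] += delta
print("final objective:", objective(design))
```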

2021 ◽  
Vol 54 (3) ◽  
pp. 1-33
Author(s):  
Blesson Varghese ◽  
Nan Wang ◽  
David Bermbach ◽  
Cheol-Ho Hong ◽  
Eyal De Lara ◽  
...  

Edge computing is the next Internet frontier: it will leverage computing resources located near users, sensors, and data stores to provide more responsive services. It is therefore envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of such systems will need to be captured rapidly, a process referred to as performance benchmarking, to support application deployment, resource orchestration, and adaptive decision-making. Edge performance benchmarking is a nascent research avenue that has gained momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research to identify the systems under test, the techniques analyzed, and the benchmark runtimes used in edge performance benchmarking.


2021 ◽  
Vol 13 (4) ◽  
pp. 744
Author(s):  
J. Xavier Prochaska ◽  
Peter C. Cornillon ◽  
David M. Reiman

We performed an out-of-distribution (OOD) analysis of ∼12,000,000 semi-independent 128 × 128 pixel² sea surface temperature (SST) regions, which we define as cutouts, from all nighttime granules in the MODIS R2019 Level-2 public dataset to discover the most complex or extreme phenomena at the ocean’s surface. Our algorithm (ULMO) is a probabilistic autoencoder (PAE) that combines two deep learning modules: (1) an autoencoder, trained on ∼150,000 random cutouts from 2010, which represents any input cutout with a 512-dimensional latent vector, akin to a (non-linear) Empirical Orthogonal Function (EOF) analysis; and (2) a normalizing flow, which maps the autoencoder’s latent space distribution onto an isotropic Gaussian manifold. From the latter, we calculated a log-likelihood (LL) value for each cutout and defined outlier cutouts to be those in the lowest 0.1% of the distribution. These exhibit large gradients and patterns characteristic of a highly dynamic ocean surface, and many are located within larger complexes whose unique dynamics warrant future analysis. Without guidance, ULMO consistently locates the outliers where the major western boundary currents separate from the continental margin. Prompted by these results, we began exploring the fundamental patterns learned by ULMO, thereby identifying several compelling examples. Future work may find that algorithms such as ULMO hold significant promise for learning and deriving other, not-yet-identified behaviors in the ocean from the many archives of satellite-derived SST fields. We see no impediment to applying them to other large remote-sensing datasets for ocean science (e.g., SSH and ocean color).
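The overall pipeline, compress each cutout to a latent vector, fit a density model to the latents, and flag the lowest 0.1% of log-likelihoods, can be sketched compactly. In the sketch below, PCA and a Gaussian mixture stand in for the paper's deep autoencoder and normalizing flow, and the cutouts are random, downscaled placeholders; none of this is the authors' code.

```python
# ULMO-style outlier pipeline (simplified stand-ins, placeholder data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cutouts = rng.normal(size=(2000, 32 * 32))            # fake, downscaled cutouts

latents = PCA(n_components=64).fit_transform(cutouts)  # stand-in for autoencoder
density = GaussianMixture(n_components=4, random_state=0).fit(latents)
log_lik = density.score_samples(latents)               # LL value per cutout,
                                                       # stand-in for the flow
threshold = np.quantile(log_lik, 0.001)                # lowest 0.1% of the
outliers = np.where(log_lik <= threshold)[0]           # distribution = outliers
print(f"{len(outliers)} outlier cutouts flagged")
```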


Author(s):  
HYEON SOO KIM ◽  
YONG RAE KWON ◽  
IN SANG CHUNG

Software restructuring is recognized as a promising method to improve the logical structure and understandability of a software system composed of modules with loosely coupled elements. In this paper, we present methods for restructuring an ill-structured module during the software maintenance phase. The methods identify modules performing multiple functions and restructure such modules. To identify multi-function modules, the notion of a tightly coupled module, one that performs a single specific function, is formalized. The method utilizes information on data and control dependence, and applies program slicing to extract the tightly coupled modules from a multi-function module. The identified multi-function module is then restructured into a number of functional-strength modules or an informational-strength module, with module strength used as the criterion for deciding how to restructure. The proposed methods can be readily automated and incorporated into a software tool.
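The core slicing idea can be illustrated with a toy example. Below, a module's statements are modeled as (defined variable, used variables) pairs, and a backward slice is computed for each output; disjoint slices suggest the module performs multiple functions and can be split. This is a heavily simplified sketch of data dependence only; the paper's method also uses control dependence, which the sketch omits.

```python
# Toy backward slicing over data dependence (control dependence omitted).
def backward_slice(stmts, output_var):
    needed, slice_ids = {output_var}, []
    for i in range(len(stmts) - 1, -1, -1):   # walk the statements backwards
        target, uses = stmts[i]
        if target in needed:                  # statement defines a needed var:
            slice_ids.append(i)               # it belongs to the slice, and
            needed |= set(uses)               # its inputs become needed too
    return sorted(slice_ids)

# A multi-function module computing two unrelated outputs, `area` and `mean`.
stmts = [
    ("w",    []),          # 0: read width
    ("h",    []),          # 1: read height
    ("area", ["w", "h"]),  # 2: area = w * h
    ("xs",   []),          # 3: read samples
    ("n",    ["xs"]),      # 4: n = len(xs)
    ("s",    ["xs"]),      # 5: s = sum(xs)
    ("mean", ["s", "n"]),  # 6: mean = s / n
]
for out in ("area", "mean"):
    # Disjoint slices ([0,1,2] vs [3,4,5,6]) indicate two tightly coupled
    # single-function modules that could be extracted.
    print(out, "->", backward_slice(stmts, out))
```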


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi202-vi203
Author(s):  
Alvaro Sandino ◽  
Ruchika Verma ◽  
Yijiang Chen ◽  
David Becerra ◽  
Eduardo Romero ◽  
...  

Abstract: PURPOSE: Glioblastoma is a highly heterogeneous brain tumor. Primary treatment for glioblastoma involves maximally safe surgical resection. After surgery, resected tissue slides are visually analyzed by neuropathologists to identify the distinct histological hallmarks characterizing glioblastoma, including high cellularity, necrosis, and vascular proliferation. In this work, we present a hierarchical deep learning-based strategy to automatically segment distinct glioblastoma niches, including necrosis, cellular tumor, and hyperplastic blood vessels, on digitized histopathology slides. METHODS: We employed the IvyGap cohort, for which hematoxylin and eosin (H&E) slides (digitized at 20X magnification) from n=41 glioblastoma patients were available, along with expert-driven segmentations of cellular tumor, necrosis, and hyperplastic blood vessels (among other histological attributes). We randomly assigned n=120 slides from 29 patients for training, n=38 slides from 6 cases for validation, and n=30 slides from 6 patients for testing of our deep learning model, which is based on the residual network architecture (ResNet-50). Approximately 2,000 patches of 224x224 pixels were sampled from every slide. Our hierarchical model first segments necrotic from non-necrotic (i.e., cellular tumor) regions, and then, within the regions segmented as non-necrotic, identifies hyperplastic blood vessels from the rest of the cellular tumor. RESULTS: Our model achieved a training accuracy of 94% and a testing accuracy of 88%, with an area under the curve (AUC) of 92%, in distinguishing necrotic from non-necrotic (i.e., cellular tumor) regions. Similarly, we obtained a training accuracy of 78% and a testing accuracy of 87% (with an AUC of 94%) in identifying hyperplastic blood vessels from the rest of the cellular tumor. CONCLUSION: We developed a reliable hierarchical segmentation model for the automatic segmentation of necrosis, cellular tumor, and hyperplastic blood vessels on digitized H&E-stained glioblastoma tissue images. Future work will extend the model to the segmentation of pseudopalisading patterns and microvascular proliferation.
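The two-stage hierarchy can be sketched directly: one ResNet-50 head separates necrotic from non-necrotic patches, and a second runs only on the non-necrotic patches to find hyperplastic vessels. The sketch below uses random tensors in place of H&E patches and untrained weights; variable names and the binary-head construction are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a hierarchical two-stage patch classifier (ResNet-50).
import torch
import torchvision

def make_binary_resnet50():
    # weights=None keeps the sketch offline; pretrained weights would be
    # loaded and fine-tuned in practice.
    model = torchvision.models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two-class head
    return model.eval()

stage1 = make_binary_resnet50()   # necrosis vs. non-necrotic (cellular tumor)
stage2 = make_binary_resnet50()   # hyperplastic vessels vs. rest of tumor

patches = torch.randn(8, 3, 224, 224)        # placeholder 224x224 H&E patches
with torch.no_grad():
    necrotic = stage1(patches).argmax(1) == 0        # stage 1 over all patches
    non_necrotic = patches[~necrotic]                # stage 2 only sees the
    vessels = stage2(non_necrotic).argmax(1) == 0    # non-necrotic patches
print(necrotic.tolist(), vessels.tolist())
```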


2017 ◽  
Vol 17 (1) ◽  
Author(s):  
Romans Pancs

Abstract: Some industries have consumers who seek novelty and firms that innovate vigorously and whose organizational structure is loosely coupled, or easily adaptable. Other industries have consumers who take comfort in the traditional and firms that innovate little and whose organizational structure is tightly coupled, or not easily adaptable. This paper proposes a model that explains why these features tend to covary across industries. The model highlights the pervasiveness of equilibrium inefficiency (innovation can be insufficient or excessive) and the nonmonotonicity of welfare in the equilibrium amount of innovation.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Aolin Che ◽  
Yalin Liu ◽  
Hong Xiao ◽  
Hao Wang ◽  
Ke Zhang ◽  
...  

In the past decades, owing to their low design cost and easy maintenance, text-based CAPTCHAs have been extensively used to construct security mechanisms for user authentication. With recent advances in machine/deep learning for recognizing CAPTCHA images, a growing number of attack methods have been presented to break text-based CAPTCHAs. These machine learning/deep learning-based attacks often rely on training models on massive volumes of training data, and poorly constructed CAPTCHA data leads to low attack accuracy. To investigate this issue, we propose a simple, generic, and effective preprocessing approach to filter and enhance the original CAPTCHA data set so as to improve the accuracy of previous attack methods. In particular, the proposed preprocessing approach consists of a data selector and a data augmentor. The data selector automatically filters the training data down to the samples with training significance, while the data augmentor uses four different image noises to generate different CAPTCHA images. The resulting well-constructed CAPTCHA data set can better train deep learning models and further improve the accuracy rate. Extensive experiments demonstrate that the accuracy rates of five commonly used attack methods are 2.62% to 8.31% higher with our preprocessing approach than without it. Moreover, we also discuss potential research directions for future work.
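The data augmentor's idea, one noisy variant of each CAPTCHA per noise type, is easy to sketch. The abstract does not name the four noises, so the choices below (Gaussian, salt-and-pepper, speckle, Poisson) are assumptions, and the input image is a random placeholder.

```python
# Sketch of a four-noise CAPTCHA augmentor; the noise types are assumed,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def gaussian(img):
    return np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)

def salt_pepper(img, p=0.02):
    out, mask = img.copy(), rng.random(img.shape)
    out[mask < p / 2] = 0.0          # pepper pixels
    out[mask > 1 - p / 2] = 1.0      # salt pixels
    return out

def speckle(img):
    return np.clip(img * (1 + rng.normal(0, 0.1, img.shape)), 0, 1)

def poisson(img):
    return np.clip(rng.poisson(img * 255) / 255.0, 0, 1)

def augment(img):
    # One variant per noise type, quadrupling the training set.
    return [f(img) for f in (gaussian, salt_pepper, speckle, poisson)]

captcha = rng.random((60, 160))      # placeholder grayscale CAPTCHA image
variants = augment(captcha)
print(len(variants), variants[0].shape)
```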


2020 ◽  
Vol 9 (3) ◽  
pp. 1137-1148
Author(s):  
Jafar Majidpour ◽  
Hiwa Hasanzadeh

The application of deep learning to enhance the accuracy of intrusion detection in modern computer networks was studied in this paper. Approaches to identifying attacks in computer networks fall into two categories, intrusion detection and anomaly detection, distinguished by the information used in the learning phase. Intrusion detection uses both routine traffic and attack traffic. Anomaly detection methods instead attempt to model the normal behavior of the system, and any event that violates this model is considered suspicious: for example, if a web server, which is usually passive, attempts to open connections to a large number of addresses, it is likely infected with a worm. Anomaly detection techniques include statistical models, secure-system approaches, protocol analysis, file checking, whitelisting, neural networks, genetic algorithms, support vector machines, and decision trees. Our results demonstrate that our approach offers high levels of accuracy, precision, and recall, together with reduced training time. In future work, the first avenue of exploration for improvement will be to assess and extend the capability of our model to handle zero-day attacks.
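The anomaly-detection idea, fit a model to normal traffic only and flag whatever it explains poorly, can be sketched with a small autoencoder. The feature vectors, network shape, and threshold below are all toy assumptions, not the paper's model.

```python
# Toy anomaly detector: an autoencoder trained on normal traffic flags
# connections with high reconstruction error as suspicious.
import torch

torch.manual_seed(0)
normal = torch.randn(1000, 16)                 # placeholder flow features
                                               # (duration, bytes, packets, ...)
ae = torch.nn.Sequential(                      # small bottleneck autoencoder
    torch.nn.Linear(16, 4), torch.nn.ReLU(), torch.nn.Linear(4, 16))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                           # train on normal traffic only
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

def is_suspicious(x, threshold=2.0):
    # High reconstruction error means the connection violates the learned
    # model of normal behavior (threshold is an arbitrary toy value).
    with torch.no_grad():
        err = ((ae(x) - x) ** 2).mean(dim=1)
    return err > threshold

test = torch.cat([torch.randn(5, 16), 10 * torch.randn(5, 16)])  # 5 normal, 5 odd
print(is_suspicious(test).tolist())
```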


Author(s):  
Carlos Granell ◽  
Laura Díaz ◽  
Michael Gould

The development of geographic information systems (GISs) has been highly influenced by the overall progress of information technology (IT). These systems evolved from monolithic systems to become personal desktop GISs, with all or most data held locally, and then evolved to the Internet GIS paradigm in the form of Web services (Peng & Tsou, 2001). The highly distributed Web services model is such that geospatial data are loosely coupled with the underlying systems used to create and handle them, and geospatial processing functionalities are made available as remote, interoperable, discoverable geospatial services. In recent years the software industry has moved from tightly coupled application architectures such as CORBA (Common Object Request Broker Architecture; Vinoski, 1997) toward service-oriented architectures (SOAs) based on a network of interoperable, well-described services accessible via Web protocols. This has led to de facto standards for the delivery of services, such as Web Service Description Language (WSDL) to describe the functionality of a service, Simple Object Access Protocol (SOAP) to encapsulate Web service messages, and Universal Description, Discovery, and Integration (UDDI) to register and provide access to service offerings. Adoption of this Web services technology as an alternative to monolithic GISs is an emerging trend to provide distributed geospatial access, visualization, and processing. The GIS approach to SOA-based applications is perhaps best represented by the spatial data infrastructure (SDI) paradigm, in which standardized interfaces are the key to allowing geographic services to communicate with each other in an interoperable manner. This article focuses on standard interfaces and also on current implementations of geospatial data processing over the Web, commonly used in SDI environments. We also mention several challenges yet to be met, such as those concerned with the semantics, discovery, and chaining of geospatial processing services, and with the extension of geospatial processing capabilities to the SOA world.
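To make the division of labor among the standards concrete: WSDL describes an operation, SOAP wraps the request and response messages, and the messages travel over plain HTTP. The sketch below constructs and posts a SOAP envelope; the endpoint URL, namespace, and GetElevation operation are entirely hypothetical, invented only to show the message shape.

```python
# Hypothetical SOAP call to a geospatial Web service; endpoint and
# operation are made up for illustration.
import urllib.request

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetElevation xmlns="http://example.org/geo">
      <lat>39.99</lat><lon>-0.07</lon>
    </GetElevation>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.org/geoservice",           # hypothetical service endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.org/geo/GetElevation"})
# response = urllib.request.urlopen(req)       # would return a SOAP envelope
```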


2020 ◽  
Vol 20 (4) ◽  
pp. 609-624
Author(s):  
Mohamed Marzouk ◽  
Mohamed Zaher

Purpose: This paper aims to apply a methodology capable of classifying and localizing mechanical, electrical and plumbing (MEP) elements to assist facility managers, and thereby to reduce the technical complexity and sophistication that different systems present to the facility management (FM) team. Design/methodology/approach: This research exploits artificial intelligence (AI) in FM operations by proposing a new system that uses a pre-trained deep learning model for transfer learning. The model can identify new MEP elements through image classification, using a deep convolutional neural network together with a support vector machine (SVM) technique under supervised learning. In addition, an expert system is developed and integrated into the proposed system through an Android application to identify the required maintenance for the identified elements. The FM team can then locate the identified assets with Bluetooth tracker devices to perform the required maintenance. Findings: The proposed system aids facility managers in their tasks and decreases the maintenance costs of facilities by allowing assets to be maintained, upgraded, and operated cost-effectively. Research limitations/implications: The paper considers three fire protection systems for proactive maintenance, whereas other structural or architectural systems can also significantly affect the level of service and incur expensive repairs and maintenance. Also, the proposed system relies on different platforms that need to be consolidated for facility technicians and managers as end-users. The authors will therefore address these limitations and expand the study as a case study in future work. Originality/value: This paper helps, in a proactive manner, to reduce the lack of knowledge of the maintenance required for MEP elements, which leads to a lower life-cycle cost. These MEP elements have a large share in the operation and maintenance costs of building facilities.
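The classification pipeline the abstract describes, a pre-trained CNN used for transfer learning with an SVM classifying the extracted features, can be sketched as follows. The backbone choice, class labels, and data are placeholder assumptions, not the authors' system.

```python
# Sketch of transfer learning for MEP element classification: deep CNN
# features feeding an SVM under supervised learning.
import torch
import torchvision
from sklearn.svm import SVC

# weights=None keeps the sketch offline; a pre-trained model would be
# loaded in practice for transfer learning.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()            # expose the 2048-d feature vector
backbone.eval()

images = torch.randn(40, 3, 224, 224)        # placeholder MEP element photos
labels = [i % 3 for i in range(40)]          # e.g. sprinkler / alarm / extinguisher
with torch.no_grad():
    feats = backbone(images).numpy()         # transfer-learned deep features

clf = SVC(kernel="rbf").fit(feats, labels)   # SVM on top of the CNN features
print(clf.predict(feats[:5]))
```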


Author(s):  
Kwan-Ming Wan ◽  
Pouwan Lei ◽  
Chris Chatwin ◽  
Rupert Young

The established global business environment is under intense pressure from Asian countries such as Korea, China, and India. This forces businesses to concentrate on their core competencies and adopt leaner management structures. The coordination of activities, both within companies and with suppliers and customers, has become a crucial competitive advantage. At the same time, the Internet has transformed the way in which businesses run. As the Internet has become a cheap and effective communication channel, businesses have been quick to adopt the Web for integrating their systems and linking them with their suppliers and customers. Current enterprise computing using J2EE (Java 2 Platform, Enterprise Edition) has yielded systems in which the coupling between components is too tight to be effective for ubiquitous B2B (business-to-business) and B2C (business-to-consumer) e-business over the Internet. This approach requires too much agreement and shared context between business systems from different organizations. There is a need to move away from tightly coupled, monolithic systems and toward systems of loosely coupled, dynamically bound components. The emerging technology of Web services provides the tools to accomplish this integration, but the approach presents many new challenges and problems that must be overcome. In this article, we discuss current approaches to enterprise application integration (EAI) and their limitations, as well as the need for service-oriented applications, that is, Web services. Finally, the challenges in implementing Web services are outlined.

