Human Validation of Computer vs Human Generated Design Sketches

Author(s):  
Christian Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the perceived visual and functional characteristics of computer-generated sketches compared to human-created sketches. In addition, this work explores the possible biases that humans may hold toward the perceived functionality of computer-generated sketches. Recent advancements in deep generative design methods have allowed designers to implement computational tools that automatically generate large pools of new design ideas. However, if computational tools are to co-create ideas and solutions alongside designers, their ability to generate not only novel but also functional ideas needs to be explored. Moreover, since decision-makers must select creative ideas for further development to ensure innovation, their possible biases toward computer-generated ideas also need to be explored. In this study, 619 human participants were recruited to analyze the perceived visual and functional characteristics of 50 human-created 2D sketches and 50 2D sketches generated by a deep learning generative model (i.e., computer-generated). The results indicate that participants perceived the computer-generated sketches as more functional than the human-generated ones. This perceived functionality was not biased by the presence of labels that explicitly presented the sketches as either human or computer generated. Moreover, the results reveal that participants were unable to classify the 2D sketches as human or computer generated with accuracy greater than random chance. The results provide evidence supporting the capabilities of deep learning generative design tools and their potential to assist designers in creative tasks such as ideation.
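For context on the chance-level classification result: whether an observed accuracy exceeds random guessing is typically checked with an exact binomial test. A minimal stdlib-only sketch follows; the counts are hypothetical, not the study's data:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    correct answers if every classification were a coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 53 correct out of 100 human-vs-computer judgments.
p_value = binom_sf(53, 100)   # probability of doing at least this well by chance
print(round(p_value, 3))      # ~0.309 -> consistent with guessing
```

A large p-value here means the accuracy is statistically indistinguishable from the 50% expected under random guessing.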

2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Christian E. Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the possible biases that individuals may hold toward the perceived functionality of machine-generated designs compared to human-created designs. Toward this end, 1187 participants were recruited via Amazon Mechanical Turk (AMT) to analyze the perceived functional characteristics of both human-created two-dimensional (2D) sketches and sketches generated by a deep learning generative model. In addition, a computer simulation was used to test the capability of the sketched ideas to perform their intended function and to explore the validity of participants' responses. The results reveal that participant and computer-simulation evaluations were in agreement, indicating that sketches generated by the deep generative design model were more likely to perform their intended function than the human-created sketches used to train the model. The results also reveal that participants were subject to biases while evaluating the sketches: their age and domain knowledge were positively correlated with their perceived functionality of the sketches. The results provide evidence that supports the capability of deep learning generative design tools to generate functional ideas and their potential to assist designers in creative tasks such as ideation.
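Agreement between two evaluation sources (e.g., human raters and a simulation) is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A stdlib-only sketch on made-up binary labels (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length lists of binary labels (1 = functional)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n        # raw agreement rate
    p_a1, p_b1 = sum(a) / n, sum(b) / n                     # marginal 1-rates
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)        # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts on 8 sketches: participant vs. simulation.
human = [1, 1, 0, 1, 0, 0, 1, 1]
sim   = [1, 1, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(human, sim), 2))  # -> 0.71 (substantial agreement)
```

Kappa near 1 indicates strong agreement beyond chance; near 0 indicates agreement no better than chance.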


2020 ◽  
Vol 2 (3) ◽  
pp. 1007-1023 ◽  
Author(s):  
Ravi S. Hegde

We review recent progress in the application of Deep Learning (DL) techniques for photonic nanostructure design and provide a perspective on current limitations and fruitful directions for further development.


Author(s):  
Soyoung Yoo ◽  
Sunghee Lee ◽  
Seongsin Kim ◽  
Kwang Hyeon Hwang ◽  
Jong Ho Park ◽  
...  

2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.


2021 ◽  
Vol 2042 (1) ◽  
pp. 012116
Author(s):  
Pierson Clotilde ◽  
Soto Magán Victoria Eugenia ◽  
Aarts Mariëlle ◽  
Andersen Marilyne

Recent developments in the lighting research field have demonstrated the importance of proper exposure to light in mediating several of our behavioral and physiological responses. However, we nowadays spend around 90% of our time indoors, often with quite limited access to bright daylight. To anticipate how much the built environment actually influences our light exposure, and how much it may ultimately impact our health, well-being, and productivity, new computational tools are needed. In this paper, we present a first attempt at a simulation workflow that integrates a spectral simulation tool with a light-driven prediction model of alertness. The goal is to optimize the effects of light on building occupants by informing decision-makers about the impact of different design choices. The workflow is applied to a case study to provide an example of the learnings that can be expected from it.


Author(s):  
V. I. Solovyov ◽  
O. V. Rybalskiy ◽  
V. V. Zhuravel ◽  
V. K. Zheleznyak

The possibility of building an effective system for detecting traces of editing in digital phonograms, based on deep learning neural networks, is experimentally demonstrated. The experiment investigated the ability of such networks to detect pauses containing traces of editing. An experimental data set was created in an audio editor from phonograms recorded on different digital audio recording devices (at a sampling rate of 44.1 kHz). Pauses with durations from 100 ms to a few seconds were preselected from these recordings. From 1000 selected pauses, an array of pause fragments was formed automatically, from which arrays of pause fragments of different durations, about 100,000 fragments in size, were generated. To form the array of edited pause fragments, the chosen pauses were split into parts of random size, and new pauses with a fixed editing point were assembled from them. The overall array of pause fragments was divided into training and test sets, and the maximum efficiency achieved on the test set during training was determined; in the general case, this efficiency is given by the maximum probability of correctly classifying fragments with and without editing. On this basis, a scientifically grounded methodology for detecting traces of editing in digital phonograms using deep learning neural networks is proposed. The experiments showed that an effective system for detecting such traces can be built. Further development of the methodology should focus on increasing the probability of correct binary classification of the investigated pauses.
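The dataset-construction step described above (splicing two different pauses at a fixed point to create "edited" examples) can be sketched as follows. The fragment length, pause counts, and synthetic noise pauses are illustrative assumptions, not the paper's actual data:

```python
import random

FRAG_LEN = 4410  # samples: 100 ms at a 44.1 kHz sampling rate

def make_unedited(pause):
    """Cut one 100 ms fragment from a single recorded pause (label = 0)."""
    start = random.randrange(len(pause) - FRAG_LEN)
    return pause[start:start + FRAG_LEN]

def make_edited(pause_a, pause_b):
    """Splice fragments of two different pauses at a fixed point,
    leaving an editing trace mid-fragment (label = 1)."""
    cut = FRAG_LEN // 2
    return make_unedited(pause_a)[:cut] + make_unedited(pause_b)[cut:]

# Stand-in pauses: 1 s of low-level noise each; real data would be PCM audio.
pauses = [[random.gauss(0, 0.01) for _ in range(44100)] for _ in range(4)]
dataset = (
    [(make_unedited(p), 0) for p in pauses] +
    [(make_edited(a, b), 1) for a, b in zip(pauses[::2], pauses[1::2])]
)
random.shuffle(dataset)  # then split into training and test arrays
```

A binary classifier (e.g., a 1D convolutional network) would then be trained on the labeled fragments to estimate the probability of correct edited/unedited classification.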


2021 ◽  
pp. 340-348
Author(s):  
A. A. Kolobkova

The article analyzes the first educational books that laid the foundation for the further development of educational publishing on the study of the French language in Russia. The best-known early French textbooks are examined: «New French Grammar...» by V. Ye. Teplov, «French Alphabet» by A. de Lavi, «Manual...» by J. Sigesbek, the «French Alphabet» of the Academy of Sciences, the «New French Dictionary» by P. I. Bogdanovich, and others. The author concludes that all French textbooks of the period under review act as a kind of 'mirror' reflecting the progress that took place in Russian pedagogical thought. An important observation is that many French alphabet books, whose target audience was gymnasium students, gained popularity among those who studied French on their own. In other words, they passed into the category of self-instruction manuals, thereby significantly expanding their functional characteristics.


2021 ◽  
Author(s):  
Gary M. Stump ◽  
Michael Yukish ◽  
Jonathan Cagan ◽  
Christopher McComb

Human subject experiments are often used in research efforts to understand human behavior in design. However, such research is often time-consuming, expensive, and limited in scope due to the need to experimentally control specific variables. This work develops an initial digital simulation of team-based multidisciplinary design, in which the actions of individual team members are simulated using deep learning models trained on historical human design trends. The main benefit of this work is simulating design session events and interactions without human participants, providing a complementary method for rapidly performing digital team-based experiments. This research merges the benefits of purely data-driven modeling, with minimal assumptions about process, with the strengths of agent-based modeling, in which agent behavior can be tailored. Initial results show that the simulated design team sessions replicate the trends and distributions of human-based team sessions while running approximately 21 times faster than equivalent human subject studies. The multidisciplinary design problem currently simulated is loosely coupled, in the sense that agent behaviors can be modeled in isolation from other agents and yet replicate the behavior of the ensemble. Future work will extend the agents with sense-and-respond behaviors that can model tightly coupled problems and truly evaluate team formulations.
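The loose coupling noted above, where each agent's behavior is modeled in isolation from the others, can be illustrated with a minimal agent loop. The design problem and the agent policy here are invented stand-ins, not the paper's trained deep learning models:

```python
import random

class DesignerAgent:
    """Stand-in for a learned behavioral model: each agent adjusts only
    its own design variable, independently of the other agents."""
    def __init__(self, target):
        self.target = target   # the value this discipline is driving toward
        self.value = 0.0

    def act(self):
        # A trained deep model would predict the next action from session
        # history; a noisy step toward the target serves as a placeholder.
        self.value += 0.5 * (self.target - self.value) + random.gauss(0, 0.01)

def run_session(agents, steps=50):
    """Simulate a design session: every agent acts each round, in isolation."""
    for _ in range(steps):
        for agent in agents:
            agent.act()
    return [a.value for a in agents]

random.seed(0)
team = [DesignerAgent(t) for t in (1.0, -2.0, 0.5)]
print(run_session(team))  # each variable converges near its own target
```

Because no agent reads another agent's state, each policy could be trained and validated separately, which is what makes the problem loosely coupled; tightly coupled problems would require agents that sense and respond to one another.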


Author(s):  
Lennie Scott-Webber

Too many stakeholders are ignoring too much scientific research, and the net result is that too many students are left behind academically. Significant and strategic changes must occur quickly to correct this fundamental outcome. This chapter explores issues relative to the current state of classroom design and why classrooms haven't changed systemically in over 4000 years. It shares definitions of active learning and behavioral research basics, the nature of the physical learning place, Evidence-Based Design (EBD) solutions, and examples of solution features and capabilities impacting pedagogy (i.e., teaching and learning strategies), technology, and spaces. Metrics offering 'proof' of impact on engagement are cited, and the author argues that space provides behavioral cues. To simplify the complexity of moving from a teacher-centric paradigm and its design solutions to a learner-centric one, two items for consideration are presented: 1) a formula guiding deep learning parameters for all stakeholders, and 2) a decision-makers' checklist.


2011 ◽  
pp. 149-160 ◽  
Author(s):  
N. Feltovich

Human-participant experiments using markets with asymmetric information typically exhibit a "winner's curse," wherein bidders systematically bid more than their optimal amount. The winner's curse is very persistent; even when participants are able to make decisions repeatedly in the same situation, they repeatedly overbid. Why do people keep making the same mistakes over and over? In this chapter, we consider a class of one-player decision problems that generalize Akerlof's (1970) market-for-lemons model. We show that if decision makers learn via reinforcement, specifically by the reference point model of Erev and Roth (1996), their behavior typically changes very slowly and persistent mistakes are likely. We also develop testable predictions regarding when individuals ought to be able to learn more quickly.
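The slow learning described above follows from the structure of reinforcement rules: choice propensities are bumped by realized payoffs, so early habits decay only gradually. A heavily simplified sketch of such a rule in a toy bidding problem (the payoffs and parameters are illustrative, not Erev and Roth's calibrated reference point model):

```python
import random

def reinforcement_bids(rounds=1000, bids=(0, 1, 2, 3), seed=1):
    """Toy reinforcement learner in a lemons-style decision problem.
    Payoffs are hypothetical; bid 1 is the optimal choice."""
    rng = random.Random(seed)
    payoff = {0: 0.0, 1: 1.0, 2: 0.6, 3: 0.1}  # made-up expected payoffs
    propensity = {b: 1.0 for b in bids}         # equal initial propensities
    history = []
    for _ in range(rounds):
        # Choose a bid with probability proportional to its propensity.
        total = sum(propensity.values())
        r, acc = rng.uniform(0, total), 0.0
        choice = bids[-1]
        for b in bids:
            acc += propensity[b]
            if r <= acc:
                choice = b
                break
        propensity[choice] += payoff[choice]    # reinforce by realized payoff
        history.append(choice)
    return history

late = reinforcement_bids()[-100:]
print(late.count(1), late.count(0))  # optimal bid vs. zero-payoff bid, late rounds
```

Because propensities accumulate additively, a suboptimal bid reinforced early keeps a nontrivial choice probability for many rounds, which is one mechanism behind persistently repeated mistakes.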

