Engineering Slidable Graphical User Interfaces with Slime

2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-29
Author(s):  
Arthur Sluÿters ◽  
Jean Vanderdonckt ◽  
Radu-Daniel Vatavu

Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
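The display configurations organized as a state-transition diagram could be sketched as a small table-driven state machine. This is a hedged illustration only: the state names (`folded`, `left_extended`, etc.) and events are assumptions for the sake of example, not the configurations reported in the paper.

```python
# Hypothetical state-transition diagram for a laptop with two slidable
# lateral displays. States and events are illustrative assumptions.
TRANSITIONS = {
    ("folded", "slide_out_left"): "left_extended",
    ("folded", "slide_out_right"): "right_extended",
    ("left_extended", "slide_out_right"): "fully_extended",
    ("right_extended", "slide_out_left"): "fully_extended",
    ("fully_extended", "fold_left"): "right_extended",
    ("fully_extended", "fold_right"): "left_extended",
}

def reconfigure(state, event):
    """Return the next display configuration; stay put on an invalid event."""
    return TRANSITIONS.get((state, event), state)

state = "folded"
for event in ["slide_out_left", "slide_out_right", "fold_left"]:
    state = reconfigure(state, event)
print(state)  # right_extended
```

A reconfiguration rule in such a design would then be a guard attached to a transition, e.g. re-allocating an interaction unit to a lateral display whenever the target state exposes it.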

2016 ◽  
Vol 10 (2) ◽  
pp. 128-147
Author(s):  
Pavel Koukal

In this paper, the author addresses the issue of collective administration of graphical user interfaces in light of the impact of the CJEU decision in BSA v. Ministry of Culture on the case law of an EU Member State (the Czech Republic). The author analyses the decision of the Czech Supreme Court, which concluded that visitors of Internet cafés use the graphical user interface actively, and that this represents relevant usage of a copyrighted work within the meaning of Art. 18 of the Czech Copyright Act. Attention is first paid to the definition of the graphical user interface, its brief history, and the possible regimes of intellectual property protection. Subsequently, the author focuses on copyright protection of graphical user interfaces in Czech law and interprets the BSA decision from the perspective of collective administration of copyright. Although graphical user interfaces are independent objects of copyright protection, when they are used while running a computer program, the legal regulation of computer programs takes priority. Based on the conclusions reached by the Supreme Administrative Court of the Czech Republic in the BSA case, the author claims that collective administration of graphical user interfaces is neither reasonable nor effective.


2020 ◽  
Vol 30 (5) ◽  
pp. 949-982 ◽  
Author(s):  
Henrietta Jylhä ◽  
Juho Hamari

Graphical user interfaces are ubiquitous in everyday human–computer interaction, predominantly on computers and smartphones. Today, various actions are performed via graphical user interface elements, e.g., windows, menus, and icons. An attractive user interface that adapts to user needs and preferences is increasingly important, as it often allows personalized information processing that facilitates interaction. However, practitioners and scholars have lacked an instrument for measuring user perception of aesthetics within graphical user interface elements to aid in creating successful graphical assets. Therefore, we studied the dimensionality of ratings of different perceived aesthetic qualities in GUI elements as the foundation for such a measurement instrument. First, we devised a semantic differential scale of 22 adjective pairs by combining prior scattered measures. We then conducted a vignette experiment with random participant (n = 569) assignment in which each participant evaluated 4 icons from a total of 68 pre-selected game app icons across 4 categories (concrete, abstract, character, and text) using the semantic scales. This resulted in a total of 2276 individual icon evaluations. Through exploratory factor analyses, the observations converged into 5 dimensions of perceived visual quality: Excellence/Inferiority, Graciousness/Harshness, Idleness/Liveliness, Normalness/Bizarreness, and Complexity/Simplicity. We then conducted confirmatory factor analyses to test the model fit of the 5-factor model with all 22 adjective pairs as well as with an adjusted version of 15 adjective pairs. Overall, this study developed, validated, and presents a measurement instrument for perceptions of the visual qualities of graphical user interfaces and/or singular interface elements (VISQUAL) that can be used in multiple ways in several contexts related to visual human–computer interaction, interfaces, and their adaptation.


Author(s):  
Amber Wagner ◽  
Jeff Gray

Although Graphical User Interfaces (GUIs) often improve usability, individuals with physical disabilities may be unable to use a mouse and keyboard to navigate a GUI-based application. In such situations, a Vocal User Interface (VUI) may be a viable alternative. Existing vocal tools (e.g., Vocal Joystick) can be integrated into software applications; however, integrating an assistive technology into a legacy application may require tedious manual adaptation. Furthermore, the challenges are deeper for an application whose GUI changes dynamically (e.g., based on the context of the program) and evolves with each new application release. This paper provides a discussion of challenges observed while mapping a GUI to a VUI. The context of the authors' examples and evaluation is taken from Myna, the VUI that is mapped to the Scratch programming environment. Initial user studies on the effectiveness of Myna are also presented in the paper.
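The core of a GUI-to-VUI mapping, including the need to rebuild the voice grammar when the GUI changes dynamically, can be sketched as follows. This is a minimal illustration under assumed names; it is not Myna's actual architecture or API.

```python
# Hypothetical GUI-to-VUI bridge: spoken phrases are mapped to GUI action
# callbacks, and the grammar is re-derived when the GUI changes.
class VocalMapper:
    def __init__(self):
        self.commands = {}  # spoken phrase -> GUI action callback

    def register(self, phrase, action):
        self.commands[phrase] = action

    def rebuild(self, widgets):
        """Re-derive the voice grammar from the current widget tree,
        e.g. after a dynamic GUI change or a new application release."""
        self.commands.clear()
        for w in widgets:
            self.register(f"click {w['label'].lower()}", w["on_click"])

    def handle(self, phrase):
        action = self.commands.get(phrase)
        return action() if action else None

mapper = VocalMapper()
mapper.rebuild([{"label": "Run", "on_click": lambda: "running script"}])
print(mapper.handle("click run"))  # running script
```

The `rebuild` step is what makes dynamic GUIs hard in practice: a legacy application rarely exposes its widget tree in a form an assistive layer can re-read on every change.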


Author(s):  
Merissa Walkenstein ◽  
Ronda Eisenberg

This paper describes an experimental study comparing a graphical user interface for a computer-telephony product designed without the involvement of a human factors engineer to a redesign of that interface produced with a human factors engineer late in the development cycle. Both interfaces were usability-tested with target customers. Results from a number of measures, both subjective and objective, indicate that the interface designed with the human factors engineer was easier to use than the interface designed without one. These results show the benefits of involving human factors engineers in the design of graphical user interfaces even towards the end of a development cycle. However, this involvement is most effective when human factors engineers are included as an integral part of the design and development process, even when they join late in that process.


10.28945/3768 ◽  
2017 ◽  
Vol 16 ◽  
pp. 171-193 ◽  
Author(s):  
Alex Pugnali ◽  
Amanda Sullivan ◽  
Marina Umashi Bers

Aim/Purpose: Over the past few years, new approaches to introducing young children to computational thinking have grown in popularity. This paper examines the role that user interfaces have in children's mastery of computational thinking concepts and positive interpersonal behaviors. Background: There is growing pressure to begin teaching computational thinking at a young age. This study explores the affordances of two very different programming interfaces for teaching computational thinking: a graphical coding application on the iPad (ScratchJr) and a tangible programmable robotics kit (KIBO). Methodology: This study used a mixed-method approach to explore the learning experiences that young children have with tangible and graphical coding interfaces. A sample of children ages four to seven (N = 28) participated. Findings: Results suggest that the type of user interface does have an impact on children's learning, but it is only one of many factors that affect positive academic and socio-emotional experiences. Tangible and graphical interfaces each have qualities that foster different types of learning.


2017 ◽  
Author(s):  
Rachel Opitz ◽  
Tyler Johnson

This paper discusses the authors’ approach to designing an interface for the Gabii Project’s digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments, and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence “on the ground,” notably SfM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SfM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between design choices and the broader discourse in archaeological theory surrounding the practice of the creation and consumption of archaeological knowledge. As a case in point, we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House.
In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically-situated user interface design, and their implications for the publication and study of archaeological materials.


2021 ◽  
Author(s):  
Pippin Barr

User-interface metaphors are a widely used, but poorly understood, technique employed in almost all graphical user interfaces. Although considerable research has gone into the applications of the technique, little work has been performed on the analysis of the concept itself. In this thesis, user-interface metaphor is defined and classified in considerable detail so as to make it more understandable to those who use it. The theoretical approach is supported by practical exploration of the concepts developed.


User interface (UI) design is the process of making interfaces in software or computerized devices with a focus on looks or style. Designers aim to create designs users will find easy to use and pleasurable. UI design typically refers to graphical user interfaces but also includes others, such as voice-controlled interfaces. In this chapter, user interface design and the learning theories that ground it are discussed first. Next, interaction styles and the types of interactions are covered. Usability benchmarks and usability evaluation instruments are also discussed.


Author(s):  
Firas Bacha ◽  
Káthia Marçal de Oliveira ◽  
Mourad Abed

User Interface (UI) personalization aims at providing the right information, at the right time, on the right device (tablet, smartphone, etc.). Personalization can be performed on the presentation of interface elements (e.g., layout, screen size, and resolution) and on the content provided (e.g., data, information, documents). While many existing approaches deal with the first type of personalization, this chapter explores content personalization. To that end, the authors define a context-aware Model Driven Architecture (MDA) approach in which the UI model is enriched by data from a domain model and its mapping to a context model. They conclude that this approach is best suited to domains where one envisions several developments of software applications and/or user interfaces.
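The idea of content personalization driven by a context model can be sketched as a filter over domain content. This is a hedged illustration only: the model fields (`device`, `role`) and the sample catalog below are assumptions for the sake of example, not the chapter's actual metamodels or mapping rules.

```python
# Hypothetical content personalization: domain content items carry context
# constraints; items whose constraints match the user's current context
# are selected. A missing constraint (None) matches any context value.
def personalize(content_items, context):
    """Filter domain content by the user's context (device, role)."""
    def matches(item):
        meta = item["context"]
        return (meta.get("device") in (None, context["device"])
                and meta.get("role") in (None, context["role"]))
    return [item["data"] for item in content_items if matches(item)]

catalog = [
    {"data": "full timetable", "context": {"device": "desktop"}},
    {"data": "next departure", "context": {"device": "smartphone"}},
    {"data": "fleet status", "context": {"role": "operator"}},
]
print(personalize(catalog, {"device": "smartphone", "role": "traveler"}))
# ['next departure']
```

In an MDA setting, a rule table like this would be generated from the mapping between the domain model and the context model rather than written by hand.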


2020 ◽  
Vol 9 (7) ◽  
pp. 412 ◽  
Author(s):  
Paweł Cybulski ◽  
Tymoteusz Horbiński

The purpose of this article is to show the differences in users’ experience when performing an interactive task with GUI button arrangements based on Google Maps and OpenStreetMap in a simulation environment. The graphical user interface is part of an interactive multimedia map, and the interaction experience depends mainly on it. For this reason, we performed an eye-tracking experiment with users to examine how people experience interaction through the GUI. Based on the results related to eye movement, we present several valuable recommendations for the design of interactive multimedia maps. For better GUI efficiency, it is advisable to group buttons with similar functions in the screen corners: users analyze the corners first and only then search for the desired button. The frequency of using a given web map does not translate into generally better performance with any GUI; users perform more efficiently when working with their preferred GUI.

