Development of Word Game Algorithm for Learning Javanese Script

Author(s):  
Ozzi Suria

Students consider learning Javanese script difficult, particularly distinguishing and memorizing Carakan, and memorizing Sandangan and Pasangan together with their writing rules. This work develops a supporting medium for learning Javanese script. The development process starts by defining the game functionalities with use-case diagrams; an activity diagram then describes the workflow of the game algorithm. The database supporting the game is also created and displayed as a physical data model. Afterward, the game algorithm is implemented in JavaScript so that the game can be played in a web browser. Twenty-seven respondents were asked to test the game and to fill in questionnaires about the web application. The results suggest that 100% of respondents agree that the web application is necessary and useful for learning Javanese script. The application benefits users such as students who still need to learn Javanese script in school, with a 97% average success rate in running the game.
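The abstract does not reproduce the game algorithm itself; a minimal sketch of how a browser-based quiz might check a Carakan answer could look like the following. The mapping and function names are illustrative only (the glyphs are taken from the Unicode Javanese block, not from the paper's data set):

```javascript
// Illustrative Latin-to-Carakan lookup table (not the paper's actual data).
const carakan = {
  ha: "ꦲ",
  na: "ꦤ",
  ca: "ꦕ",
  ra: "ꦫ",
  ka: "ꦏ",
};

// Pick a random glyph to show the player, remembering the expected answer.
function pickQuestion(table) {
  const keys = Object.keys(table);
  const answer = keys[Math.floor(Math.random() * keys.length)];
  return { glyph: table[answer], answer };
}

// Verify the player's Latin transliteration against the displayed glyph.
function checkAnswer(table, glyph, latin) {
  return table[latin.trim().toLowerCase()] === glyph;
}
```

A real implementation would also have to cover Sandangan and Pasangan combination rules, which is where the memorization difficulty the study reports actually lies.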

i-com ◽  
2008 ◽  
Vol 6 (3/2007) ◽  
pp. 23-29 ◽  
Author(s):  
Birgit Bomsdorf

Summary: Task modelling has entered the development process of web applications, strengthening the usage-centred view within the early steps of Web Engineering (WE). In current approaches, however, this view is not maintained during subsequent activities to the same degree as in the field of Human-Computer Interaction (HCI). The modelling approach presented in this contribution combines models known from WE with models used in HCI to change this situation. Essentially, the WE-HCI integration is achieved by combining task and object models known from HCI with conceptual modelling known from WE. In this paper, the main focus is on the WebTaskModel, a task model adapted to web application concerns, and its contribution towards a task-related web user interface. The main difference from existing task models is the build-time and run-time usage of a generic task lifecycle. This enables the description of exceptions and erroneous situations during task performance (caused, e.g., by the stateless protocol or browser interaction) while keeping them clearly separated from the flow of correct action.
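The summary does not enumerate the lifecycle's states; a hypothetical sketch of such a generic task lifecycle as a finite-state machine, with made-up state and event names, might look like this:

```javascript
// Hypothetical task lifecycle; states/events are illustrative,
// not the WebTaskModel's own vocabulary.
const transitions = {
  enabled:   { start: "running" },
  running:   { complete: "done", fail: "failed", suspend: "suspended" },
  suspended: { resume: "running" },
  failed:    { retry: "enabled" }, // e.g. recover after a lost stateless request
  done:      {},
};

// Advance a task; an event illegal in the current state is an error,
// which keeps erroneous situations separate from the correct flow.
function step(state, event) {
  const next = (transitions[state] || {})[event];
  if (next === undefined) {
    throw new Error(`illegal event '${event}' in state '${state}'`);
  }
  return next;
}
```

At run time, a browser "back" press or a timed-out request would simply be mapped onto a `fail`-style event, moving the task into a dedicated error state instead of corrupting the normal flow.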


2021 ◽  
Author(s):  
Florian Auer ◽  
Simone Mayer ◽  
Frank Kramer

Networks are a common methodology for capturing increasingly complex associations between biological entities. They serve as a resource of biological knowledge for bioinformatics analyses and also comprise the subsequent results. However, the interpretation of biological networks is challenging and requires visualizations suited to the contained information. The most prominent software in the field for visualizing biological networks is Cytoscape, a desktop modeling environment that also includes many analysis features. A further challenge when working with networks is their distribution. Within a typical collaborative workflow, even slight changes to the network data force the visualization step to be repeated, and even minor adjustments to the visual representation require the networks to be transferred back and forth. Collaboration on the same resources requires specific infrastructure to avoid redundancies or, worse, corruption of the data. A well-established solution is the NDEx platform, where users can upload a network, share it with selected colleagues, or make it publicly available. NDExEdit is a web-based application in which simple changes can be made to biological networks within the browser, without requiring installation. With our tool, plain networks can easily be enhanced for further use in presentations and publications. Since the network data is stored only locally within the web browser, users can edit their private networks without concern about unintentional publication. The web tool is designed to conform to the Cytoscape Exchange (CX) format as its data model, which both Cytoscape and NDEx use for data transmission. The modified network can therefore be exported as a compatible CX file, in addition to standard image formats such as PNG and JPEG.
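As background, CX serialises a network as a JSON list of "aspects". A simplified sketch of building such a structure is shown below; the aspect and field names (`@id`, `n`, `s`, `t`) follow CX conventions, but this is an incomplete illustration (required metadata aspects are omitted), not a valid CX document:

```javascript
// Build a simplified CX-like structure: one nodes aspect, one edges aspect.
// "@id" numbers identify elements; "s"/"t" are an edge's source/target node ids.
function toCx(nodes, edges) {
  return [
    { nodes: nodes.map((name, i) => ({ "@id": i, n: name })) },
    { edges: edges.map(([s, t], i) => ({ "@id": i, s, t })) },
  ];
}

// Example: a two-node network, serialised for transmission.
const cx = toCx(["TP53", "MDM2"], [[0, 1]]);
const json = JSON.stringify(cx);
```

Because the whole network is one JSON value, a browser-side editor can hold it locally, mutate aspects in place, and re-serialise on export, which is what makes the no-installation, no-upload workflow described above possible.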


Author(s):  
G Saibaba ◽  
Prasanth Vaidya

<p>In this era of the internet, e-commerce is growing by leaps and bounds, leaving the growth of brick-and-mortar businesses in the dust. In many cases, brick-and-mortar businesses are resorting to having an internet- or e-commerce-driven counterpart. People in the developed world, and a growing number in the developing world, now use e-commerce websites daily to make their everyday purchases. Still, the proliferation of e-commerce in the underdeveloped world is not that great, and there is much left to be desired. This work consists of the planning process, which starts with determining the use case, domain modeling, and the architectural pattern of the web application. The development process itself is primarily divided into two parts: front-end development and back-end development. The database design is also discussed, with an emphasis on its relational connectivity.</p>


Author(s):  
M. M. Murad ◽  
M. W. Ashraf

In recent times, the evolution of web applications has gained importance within the web development process, and the factor of web evolution cannot be ignored by web developers. Web development has become complex and challenging. The process of software evolution plays an important role during the development of software, and millions of web applications are developed every year around the world. Various approaches, tools, and frameworks have been used to reorganize web applications into improved versions. Research has shown that no proper, systematic technique is available for evolving web applications. This article presents a comparative analysis of the WordPress and Django web frameworks using Lehman's laws of software evolution. It was found that six out of eight of Lehman's laws hold during the evolution process of these web frameworks.


2020 ◽  
Vol 1 (2) ◽  
pp. 72-85
Author(s):  
Angelica Lo Duca ◽  
Andrea Marchetti

Within the field of Digital Humanities, a great effort has been made to digitize documents and collections in order to build catalogs and exhibitions on the Web. In this paper, we present WeME, a web application for building a knowledge base that can be used to describe digital documents. WeME can be used by different categories of users: archivists/librarians and scholars. WeME extracts information from some well-known Linked Data nodes, i.e. DBpedia and GeoNames, as well as from traditional Web sources, i.e. VIAF. As a use case of WeME, we describe the knowledge base related to Christopher Clavius's correspondence. Clavius was a mathematician and astronomer of the 16th century. He wrote more than 300 letters, most of which are owned by the Historical Archives of the Pontifical Gregorian University (APUG) in Rome. The resulting knowledge base contains 139 links to DBpedia, 83 links to GeoNames, and 129 links to VIAF. To test the usability of WeME, we invited 26 users to try the application.


Author(s):  
Annisa Dwi Oktavianita ◽  
Hendra Dea Arifin ◽  
Muhammad Dzulfikar Fauzi ◽  
Aulia Faqih Rifa'i

RAM, formerly known simply as memory, is primary memory that makes data swiftly available without waiting for the hard disk to process all of it. Memory is used by every installed application, including web browsers, whose memory consumption has been a frequent source of disappointment. The researchers used a descriptive quantitative approach with observation, central tendency, and dispersion methods. Fifteen browsers were chosen at random and tested under low, medium, and high loads to collect their memory usage logs. The researchers then analyzed the logs using descriptive statistics to measure the central tendency and dispersion of the data. A standard reference value for browser memory usage was found to be 393.38 MB. The web browser with the lowest memory usage is Flock, at 134.67 MB, and the one with the highest is Baidu, at 699.66 MB.
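The descriptive statistics named above reduce to two small computations over the logged values; a minimal sketch follows. The sample array is illustrative (it reuses the three figures quoted in the abstract, not the raw logs):

```javascript
// Central tendency: arithmetic mean of the logged memory readings (MB).
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Dispersion: population standard deviation around that mean.
function stdDev(xs) {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
}

// Illustrative sample: lowest, reference, and highest values from the study.
const usageMb = [134.67, 393.38, 699.66];
const summary = { mean: mean(usageMb), sd: stdDev(usageMb) };
```

In the study, the same two measures would be computed per browser and per load level, and the cross-browser mean is what yields a reference value like 393.38 MB.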


2018 ◽  
Vol 7 (2.30) ◽  
pp. 6
Author(s):  
Daljit Kaur ◽  
Dr Parminder Kaur

With the growth of the web and the Internet, every area of human life has been affected. People want to make their own or their organization's presence globally visible through this medium. Web applications and/or mobile apps are used to build that recognition as well as to attract clients worldwide. In the rush to put a business or service online faster than anyone else, web applications are developed in haste and under pressure, and developers often skip the few essential activities needed to secure them from severe attacks, which may mean a great loss for the business. This work is an effort to understand the complex distributed environment of web applications and to show the impact of rushing the web development process.


2020 ◽  
Vol 4 (2) ◽  
pp. 7
Author(s):  
Taufiq Iqbal ◽  
Syarifuddin Syarifuddin

The purpose of this research is to build a repository model featuring the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Metadata Encoding and Transmission Standard (METS), and the MPEG-21 Digital Item Declaration Language (DIDL). The research model used is qualitative, and the application development method is Fourth Generation Techniques (4GT). The OAI-PMH module, together with METS and MPEG-21 DIDL, was applied to the repository application that was built. Testing the OAI-PMH URL with the OVAL validator tool found no problems in validating and verifying data for the Identify, ListMetadataFormats, ListSets, ListIdentifiers, and ListRecords commands, nor in XML validation. The test results also show the success rate of crawling each item of metadata in the web repository: the average success rate of metadata crawling by Google Scholar is 90%, while the remaining 10% of errors occur because some documents lack complete metadata, such as a bibliography or uploaded documents.
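For context, each of the validated commands is an HTTP request whose `verb` parameter names an OAI-PMH operation. A sketch of how a harvester forms such requests is below; the base URL is hypothetical, while the verbs and the `metadataPrefix` parameter come from the OAI-PMH 2.0 protocol:

```javascript
// Hypothetical repository endpoint (not the one from the study).
const BASE = "https://repo.example.org/oai";

// Build an OAI-PMH request URL for a given verb and optional parameters.
function oaiUrl(verb, params = {}) {
  const q = new URLSearchParams({ verb, ...params });
  return `${BASE}?${q.toString()}`;
}

// The commands checked by the validator map onto these requests
// (XML validation is applied to the responses, not a verb of its own).
const requests = [
  oaiUrl("Identify"),
  oaiUrl("ListMetadataFormats"),
  oaiUrl("ListSets"),
  oaiUrl("ListIdentifiers", { metadataPrefix: "oai_dc" }),
  oaiUrl("ListRecords", { metadataPrefix: "oai_dc" }),
];
```

A crawler such as Google Scholar's issues requests of this shape and then validates the returned XML, which is why incomplete per-document metadata shows up directly as harvesting errors.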


Author(s):  
S. Vitalis ◽  
A. Labetski ◽  
F. Boersma ◽  
F. Dahle ◽  
X. Li ◽  
...  

Abstract. As web applications become more popular, 3D city models would greatly benefit from a proper web-based solution to visualise and manage them. CityJSON was introduced as a JSON encoding of the CityGML data model and promises, among several benefits, the ability to be integrated with modern web technologies. In order to provide an implementation of a web application for CityJSON data, that can be used as a reference for other applications, we developed ninja. It is a web application that allows the user to easily load and investigate a CityJSON model through a web browser. In addition, it offers support for a complex feature of CityJSON: the experimental versioning mechanism. In this paper, we describe the motivation, requirements, technical aspects and achieved functionality of ninja. We believe that such a web application can facilitate the adoption of 3D city models by more practitioners and decision makers.
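A CityJSON document is plain JSON, which is what makes a browser viewer like ninja straightforward to build. A minimal sketch of loading a model and listing its city objects follows; the tiny hand-made model is for illustration only (real files carry geometry, transforms, and metadata):

```javascript
// Hand-made miniature CityJSON model (illustrative, no geometry).
const model = {
  type: "CityJSON",
  version: "1.1",
  CityObjects: {
    "bldg-1": { type: "Building" },
    "road-1": { type: "Road" },
  },
  vertices: [],
};

// Enumerate the city objects a viewer would present to the user.
function listObjects(cj) {
  if (cj.type !== "CityJSON") throw new Error("not a CityJSON document");
  return Object.entries(cj.CityObjects).map(([id, obj]) => `${id}: ${obj.type}`);
}
```

Since the whole model is one JSON value keyed by object identifiers, per-object edits and the experimental versioning mechanism can both be expressed as operations on this structure.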

