Graphical Data Sets as Compositional Structure: Sonification of Colour Graphs in RGB for Clarinet and Piano

Leonardo ◽  
2020 ◽  
pp. 1-14
Author(s):  
Thomas Metcalf

This article will follow the methodology behind the composition of the author's piece, RGB (2019), for clarinet and piano: a sonification of four colour graphs generated from Pollock's Out of the Web (1949). It will demonstrate the process of ‘mapping’ data to sound whilst creating allowances for compositional intuition. In this way, the author hopes to demonstrate the usefulness and flexibility of composing with this approach, as well as its future implications and improvements, whilst acknowledging that this is a specific example of such an approach, rather than an all-encompassing taxonomy for any visual input.
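The ‘mapping’ step described above can be sketched in miniature. The piece's actual mapping rules are not given in the abstract, so the linear colour-to-pitch function and the toy colour graph below are purely illustrative assumptions:

```python
# Illustrative sketch only: the article's actual data-to-sound mapping is not
# reproduced here. Assumes a simple linear mapping from 8-bit colour-channel
# values (0-255) onto a MIDI pitch range.

def channel_to_midi(value, low=48, high=84):
    """Map an 8-bit colour value linearly onto a MIDI pitch range."""
    return low + round(value / 255 * (high - low))

# A toy "colour graph": one (R, G, B) sample per time step.
colour_graph = [(200, 30, 90), (120, 120, 60), (15, 240, 200)]

# Sonify the red channel as a melodic line.
melody = [channel_to_midi(r) for r, g, b in colour_graph]
print(melody)  # → [76, 65, 50]
```

In a real mapping, the remaining channels could drive other parameters (dynamics, register, rhythm), with the composer free to override any value intuitively.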

Genetics ◽  
1976 ◽  
Vol 83 (2) ◽  
pp. 341-354
Author(s):  
Burt Singer ◽  
Ruth Sager ◽  
Zenta Ramanis

ABSTRACT A novel mapping procedure is presented for organelle genes or any other genetic system exhibiting a measurable frequency of exchanges occurring at a constant rate over a measurable time interval. For a set of markers in a multiply-marked cross, the exchange rates measure relative map distances from a centromere-like attachment point. With this method, we present mapping data and a linear map of genes in the chloroplast genome of Chlamydomonas. The data are plotted as log (percent remaining heterozygotes) against time and map distances are taken as proportional to slope. A statistical method which is an adaptation of jackknife methodology to a regression problem was developed to estimate slope values. A single line is fitted to pooled data for each marker from several crosses, and then lines are re-fit to a series of pooled data sets in each of which the observations from a single cross have been omitted. From these data sets a final summary slope is computed as well as a statement of its variability. The relative positions of new markers present in single crosses can then be estimated utilizing data from many crosses. The method does not distinguish between one-armed and two-armed linear or circular maps. However, evaluation of this map in conjunction with cosegregation frequency data (Sager and Ramanis 1976b) provides unambiguous evidence of the genetic circularity of the Chlamydomonas chloroplast genome.
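The leave-one-cross-out procedure described above resembles a standard jackknife applied to regression slopes. The sketch below illustrates that general idea; the ordinary least-squares fit and the toy data are assumptions, not the authors' exact statistical treatment:

```python
# Hedged sketch of a jackknife-over-crosses slope estimate: fit a line to
# pooled (time, log %-heterozygote) data, re-fit with each cross omitted,
# and summarize the slope and its variability via jackknife pseudo-values.
import statistics

def slope(points):
    """Ordinary least-squares slope for (t, y) pairs."""
    n = len(points)
    mx = sum(t for t, y in points) / n
    my = sum(y for t, y in points) / n
    num = sum((t - mx) * (y - my) for t, y in points)
    den = sum((t - mx) ** 2 for t, y in points)
    return num / den

def jackknife_slope(crosses):
    """crosses: one list of (t, y) points per cross -> (estimate, std. error)."""
    pooled = [p for c in crosses for p in c]
    full = slope(pooled)
    n = len(crosses)
    # Re-fit with each cross omitted and form jackknife pseudo-values.
    pseudo = []
    for i in range(n):
        rest = [p for j, c in enumerate(crosses) for p in c if j != i]
        pseudo.append(n * full - (n - 1) * slope(rest))
    est = statistics.mean(pseudo)
    se = statistics.stdev(pseudo) / n ** 0.5
    return est, se
```

On noiseless data lying exactly on a line, every pseudo-value equals the true slope and the standard error collapses to zero; real cross data would spread the pseudo-values and yield a usable variability statement.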


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
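The SDType idea of inferring types from the statistical distribution of properties can be illustrated in a few lines. The toy resources, property names, and uniform vote weighting below are assumptions for illustration, not the paper's exact scheme:

```python
# Rough sketch of the SDType idea: estimate P(type | property) from typed
# resources, then let each property of an untyped resource cast a weighted
# vote for its likely types.
from collections import Counter, defaultdict

# Toy training data: typed resources and the properties they appear with.
typed = {
    "Berlin": ("City", ["population", "mayor"]),
    "Munich": ("City", ["population", "mayor"]),
    "Goethe": ("Person", ["birthDate", "author"]),
}

# Property-type co-occurrence counts, i.e. an empirical P(type | property).
dist = defaultdict(Counter)
for _, (t, props) in typed.items():
    for p in props:
        dist[p][t] += 1

def sd_type(props):
    """Vote for the most likely type of an untyped resource."""
    votes = Counter()
    for p in props:
        total = sum(dist[p].values())
        for t, c in dist[p].items():
            votes[t] += c / total
    return votes.most_common(1)[0][0] if votes else None
```

Because the votes come from the data set's own statistics, no external knowledge base is consulted, matching the paper's stated constraint.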


2021 ◽  
pp. 08-14
Author(s):  
Nafea ali majeed .. ◽  
Khalid Hameed Zaboon ◽  
...  

Recently, technology has become an important part of our lives, employed across medicine, space science, agriculture, industry and beyond, and storing information on servers and in the cloud has become a necessity. The Web is a global force that has transformed people's lives, with various web applications serving billions of requests every day. However, many types of attack target the Internet, and there is a need to recognize, classify and protect against them. Due to its important global role, it has become essential to ensure that web applications are secure, accurate, and of high quality. One of the fundamental problems found on the Web is the DDoS attack. In this work, the review classifies and delineates attack types, test characteristics, evaluation techniques, evaluation methods and test data sets used in the proposed methodology. Finally, this work offers guidance and possible directions in the fight against one of the most dangerous cyber-attack types, the DDoS attack.


Author(s):  
Robert Akinade Awoyemi

The research explores the extent to which academic libraries in Nigeria are using mobile technologies for the delivery of their information and research services, and the impact these technologies may have on the professional development needs of librarians. Using a mixed method design approach, two data sets were investigated. First, the web-based library homepages of 15 tertiary education libraries in South-west Nigeria were examined for their level of conformance to a mobile platform and second, library staff from the 15 academic libraries were surveyed for their perceptions of, and experiences in, using mobile technology both within a social context and within the workplace. This research found that while mobile technologies are in use by the majority of academic libraries to a degree, lack of resources and awareness of new innovations were identified as barriers to providing mobile services that meet users' needs and expectations.


2017 ◽  
Vol 7 (1.1) ◽  
pp. 286
Author(s):  
B. Sekhar Babu ◽  
P. Lakshmi Prasanna ◽  
P. Vidyullatha

Today, the World Wide Web has grown into a familiar medium for investigating new information, business trends, trading strategies and so on. Several organizations and companies also use the Web to present their products or services across the world. E-commerce is a kind of business or commercial transaction that involves the transfer of information across the web or Internet. In this situation a huge amount of data is obtained and dumped into web services. This data overload makes it difficult to determine accurate and valuable information; hence web data mining is used as a tool to discover and mine knowledge from the Web. Web data mining technology can be applied by e-commerce organizations to offer personalized e-commerce solutions and better meet the desires of customers. Data mining algorithms such as ontology-based association rule mining using the apriori algorithm extract useful information from large data sets. We implement this data mining technique in Java; data sets are generated dynamically while transactions are processed, and various patterns are extracted.
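A minimal apriori-style frequent-itemset miner can illustrate the kind of pattern extraction described above. The transactions and support threshold are invented, and the paper's ontology-based Java variant is not reproduced here:

```python
# Minimal sketch of frequent-itemset mining in the apriori style: count
# candidate itemsets level by level, keeping only those whose support
# (fraction of transactions containing them) meets the threshold.
from itertools import combinations

transactions = [
    {"laptop", "mouse", "bag"},
    {"laptop", "mouse"},
    {"laptop", "bag"},
    {"mouse", "bag"},
]

def apriori(transactions, min_support=0.5):
    """Return {itemset: support} for all itemsets meeting min_support."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k, candidates = 1, [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Join step: grow (k+1)-item candidates from this level's survivors.
        keys = list(level)
        candidates = list({a | b for a, b in combinations(keys, 2)
                           if len(a | b) == k + 1})
        k += 1
    return frequent

freq = apriori(transactions)
```

Association rules would then be derived from these frequent itemsets by comparing supports (e.g. confidence of "laptop → mouse" is support({laptop, mouse}) / support({laptop})).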


2013 ◽  
Vol 5 (2) ◽  
pp. 365-373 ◽  
Author(s):  
H. Keller-Rudek ◽  
G. K. Moortgat ◽  
R. Sander ◽  
R. Sörensen

Abstract. We present the MPI-Mainz UV/VIS Spectral Atlas of Gaseous Molecules, which is a large collection of absorption cross sections and quantum yields in the ultraviolet and visible (UV/VIS) wavelength region for gaseous molecules and radicals primarily of atmospheric interest. The data files contain results of individual measurements, covering research of almost a whole century. To compare and visualize the data sets, multicoloured graphical representations have been created. The MPI-Mainz UV/VIS Spectral Atlas is available on the Internet at http://www.uv-vis-spectral-atlas-mainz.org. It now appears with improved browse and search options, based on new database software. In addition to the Web pages, which are continuously updated, a frozen version of the data is available under the doi:10.5281/zenodo.6951.


First Monday ◽  
2017 ◽  
Vol 22 (4) ◽  
Author(s):  
Sarah Kreiseler ◽  
Viktoria Brüggemann ◽  
Marian Dörk

Museums are broadening their program beyond the physical institutions by providing digital collections online. In digital collections, objects are prepared and presented particularly for the Web and the ambition is to provide the entirety of a physical collection. To make these rich and comprehensive data sets accessible, an explore mode is increasingly offered. The present study considers this mode, first by making sense of the term “exploration” and suggesting four functional principles in support of exploration in digital collections — view, movement, contextualization, and participation. On this basis, we compare eight well-known museums with regard to the explore modes for their digital collections. We have devised a three-part methodology, reverse information architecture, to address the question: How is the function of exploration manifested in the structure and interface elements of digital collections? With this unique method we use the given content to investigate how far the four principles are implemented in explore modes of digital collections and, broadly speaking, how explorable they are. The introduced approach to studying digital collections could be opened up to other fields to analyze a variety of Web interfaces in general.


2016 ◽  
Author(s):  
Stephen Romansky ◽  
Sadegh Charmchi ◽  
Abram Hindle

The business models of software/platform as a service have contributed to developers' dependence on the Internet. Developers can rapidly point each other and consumers to the newest software changes with the power of the hyperlink. But developers are not limited to referencing software changes to one another through the web; other shared hypermedia might include links to Stack Overflow, Twitter, and issue trackers. This work explores the software traceability of Uniform Resource Locators (URLs) which software developers leave in commit messages and software repositories. URLs are easily extracted from commit messages and source code, so it would be useful to researchers if URLs provided additional insight into project development. To assess traceability, manual topic labelling is evaluated against automated topic labelling on URL data sets. This work also shows differences between URL data collected from commit messages and URL data collected from source code, and explores outlying software projects with many URLs in case these projects do not provide meaningful software relationship information. Results from manual topic labelling show promise under evaluation, while automated topic labelling did not yield precise topics. Further investigation of manual and automated topic analysis would be useful.
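The URL extraction step can be sketched with a simple regular expression. The pattern and the coarse source categories below are assumptions standing in for the paper's actual tooling:

```python
# Simple sketch of URL extraction from commit messages. The regex and the
# category list are illustrative assumptions, not the study's instruments.
import re

URL_RE = re.compile(r"https?://[^\s\"'<>)]+")

def extract_urls(message):
    """Return all URLs found in a commit message."""
    return URL_RE.findall(message)

def categorise(url):
    """Coarse source label for a URL (illustrative categories only)."""
    if "stackoverflow.com" in url:
        return "Stack Overflow"
    if "twitter.com" in url:
        return "Twitter"
    return "other"

msg = "Fix race condition, see https://stackoverflow.com/q/12345 for details"
urls = extract_urls(msg)
```

Running the same extraction over commit logs and over source files would yield the two URL data sets the abstract contrasts, ready for topic labelling.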


Author(s):  
Qusay Abdullah Abed ◽  
Osamah Mohammed Fadhil ◽  
Wathiq Laftah Al-Yaseen

In general, multidimensional data (mobile application data, for example) contain a large amount of unnecessary information. Web app users find it difficult to get the information they need quickly and effectively due to the sheer volume of data (big data produced per second); one of the effective solutions to this problem is web personalization. In this paper, we study data mining in web personalization using a blended deep learning model, and explore how this model helps to analyze and estimate huge amounts of operations. Providing personalized recommendations to improve reliability depends on exploiting useful information in the web application. The results of this research are important for the training and testing of large data sets with a blended deep learning model based on a back-propagation neural network. The HADOOP framework was used to perform a number of experiments in different environments with a learning rate between -1 and +1. Several techniques and metrics, including the true positive rate, were also used to evaluate the proposed model.

