Recombinant Sort: N-Dimensional Cartesian Spaced Algorithm Designed from Synergetic Combination of Hashing, Bucket, Counting and Radix Sort

2020 ◽  
Vol 25 (5) ◽  
pp. 655-668
Author(s):  
Peeyush Kumar ◽  
Ayushe Gangal ◽  
Sunita Kumari

Sorting is an essential operation that is widely used and fundamental to everyday utilities such as searching, databases, and social networks, so optimizing this basic operation is cardinal. Optimization is achieved with respect to the space and time complexities of the algorithm. In this paper, a novel left-field N-dimensional cartesian-spaced sorting method is proposed by combining the best characteristics of bucket sort, counting sort, and radix sort, in addition to employing hashing and dynamic programming to make the method more efficient. A comparison between the proposed sorting method and various existing sorting methods (bubble sort, insertion sort, selection sort, merge sort, heap sort, counting sort, bucket sort, etc.) has also been performed. The time complexity of the proposed model is estimated to be linear, i.e., O(n).
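The paper's Recombinant Sort itself is not reproduced here, but the classical combination it builds on, using counting sort as the stable per-digit subroutine of least-significant-digit radix sort, can be sketched as a minimal illustration (this is the textbook technique, not the proposed N-dimensional method):

```python
def counting_sort_by_digit(a, exp, base=10):
    # Stable counting sort of non-negative integers, keyed on the
    # digit (a[i] // exp) % base.
    count = [0] * base
    for x in a:
        count[(x // exp) % base] += 1
    # Prefix sums give each digit value its final position range.
    for d in range(1, base):
        count[d] += count[d - 1]
    out = [0] * len(a)
    for x in reversed(a):  # reversed scan keeps the sort stable
        d = (x // exp) % base
        count[d] -= 1
        out[count[d]] = x
    return out

def radix_sort(a, base=10):
    # LSD radix sort: counting-sort by each digit, least significant first.
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        a = counting_sort_by_digit(a, exp, base)
        exp *= base
    return a
```

Because the per-digit pass is a counting sort, each pass runs in linear time, which is the property such hybrid designs exploit.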

Sorting is an essential concept in the study of data structures, and many sorting algorithms exist to sort the elements of a given array or list. Counting sort achieves the best (linear) time complexity among them; however, it only works for positive integers. In this paper, an extension of the counting sort algorithm is proposed that can sort real numbers as well as integers (both positive and negative).


2017 ◽  
Vol 9 (3) ◽  
pp. 30
Author(s):  
Youssouf Ahamada ◽  
Salimata G. Diagne ◽  
Amadou Coulibaly ◽  
D'ethi'e Dione ◽  
N'dogotar Nlio ◽  
...  

In this article, we propose an integer linear programming model (PLIN) for the optimal allocation of time slots at the Léopold Sédar Senghor international airport of Dakar (L.S.S.). Slots are specific allocated periods that allow an aircraft to land at, or take off from, a saturated airport. Their attribution depends on the configuration of the airport, more particularly on its capacity. We maximize the confirmed demand in each slot while taking into account the number of aircraft and the number of passengers manageable with an optimal quality of service. We used the CPLEX software to test the effectiveness of the linear model. Firstly, in the proposed integer linear model, any unmet demand is isolated. Secondly, the rejected demands are treated by introducing a model and a resolution algorithm based on dynamic programming.
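The PLIN model itself requires an ILP solver such as CPLEX, but the dynamic-programming flavour of the treatment of demands can be illustrated with a deliberately simplified sketch: a 0/1 knapsack that selects which demands to confirm within a single slot's passenger capacity (the demand loads and capacity below are hypothetical, not the paper's data or its actual model):

```python
def max_confirmed(demands, capacity):
    # 0/1 knapsack DP where each demand consumes its passenger load and
    # contributes that same load to the confirmed total.
    best = [0] * (capacity + 1)
    for load in demands:
        # Iterate capacities downward so each demand is used at most once.
        for c in range(capacity, load - 1, -1):
            best[c] = max(best[c], best[c - load] + load)
    return best[capacity]
```

For example, with hypothetical demands of 120, 180, 150, and 90 passengers and a slot capacity of 300, the DP confirms 120 + 180 = 300 passengers and rejects the rest.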


Author(s):  
Ammar Alnahhas ◽  
Bassel Alkhatib

As the data on online social networks grows larger, it is important to build personalized recommendation systems that recommend suitable content to users. There has been much research in this field that uses conceptual representations of text to match user models with the best content. This article presents a novel method for building a user model based on a conceptual representation of text, using ConceptNet concepts that go beyond named entities to include the common-sense meaning of words and phrases. The model also includes the contextual information of concepts. The authors additionally show a novel method that exploits the semantic relations of the knowledge base to extend user models. The experiments show that the proposed model and associated recommendation algorithms outperform all previous methods, as a detailed comparison in this article shows.
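As a rough sketch of the matching step only (the concept vectors below are hypothetical, and extracting them from ConceptNet is not shown), a user model and candidate items can be compared by cosine similarity over sparse concept-weight dictionaries:

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse concept-weight dicts.
    common = set(u) & set(v)
    dot = sum(u[c] * v[c] for c in common)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_model, items, top_n=2):
    # Rank items by similarity of their concept vectors to the user model.
    ranked = sorted(items,
                    key=lambda it: cosine(user_model, it["concepts"]),
                    reverse=True)
    return [it["id"] for it in ranked[:top_n]]
```

A user model weighted toward "football" and "team" would then rank football-related items above unrelated ones; real concept weights would come from the conceptual text analysis the article describes.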


2020 ◽  
Vol 37 (06) ◽  
pp. 2050034
Author(s):  
Ali Reza Sepasian ◽  
Javad Tayyebi

This paper studies two types of reverse 1-center problems under a uniform linear cost function, where edge lengths are allowed to be reduced. In the first type, the aim is to bound the objective value by a prescribed fixed value [Formula: see text] at minimum cost. The aim of the other is to improve the objective value as much as possible within a given budget. An algorithm based on dynamic programming is proposed to solve the first problem in linear time. This algorithm is then applied as a subroutine to design an algorithm that solves the second type of problem in [Formula: see text] time, in which [Formula: see text] is a fixed number depending on the problem parameters. Under the similarity assumption, this algorithm has a better complexity than the algorithm of Nguyen (2013), which has quadratic time complexity. Some numerical experiments are conducted to validate this fact in practice.


Author(s):  
Hussein Moselhy Sayed Ahmed

Viral marketing has become a conduit for today's organizations: an important pillar of organizational management and a source that enhances competitiveness and creates new opportunities through which organizations try to achieve competitive advantages and obtain new market shares. This study therefore provides insight into how social networks influence purchasing decisions through viral marketing and knowledge sharing on social networking sites (SNSs). Using a sample of 650 Egyptian college students who spend much of their time on SNSs, this study investigates the relationships among the use of SNSs, users' social relationships, online word-of-mouth, and knowledge sharing. This paper thus studies the impact of viral marketing through social networks on consumer buying decisions, and develops a proposed model to measure this effect.


2020 ◽  
Vol 30 (6) ◽  
pp. 1239-1255
Author(s):  
Merlin Carl

We consider notions of space complexity introduced by Winter [21, 22]. We answer several open questions about these notions, among them whether low space complexity implies low time complexity (it does not) and whether one of the equalities P = PSPACE, P$_{+}$ = PSPACE$_{+}$, and P$_{++}$ = PSPACE$_{++}$ holds for ITTMs (all three are false). We also show various separation results between space complexity classes for ITTMs. This considerably expands our earlier observations on the topic in Section 7.2.2 of Carl (2019, Ordinal Computability: An Introduction to Infinitary Machines), which appear here as Lemma 6 up to Corollary 9.


2017 ◽  
Vol 26 (3) ◽  
pp. 347-366 ◽  
Author(s):  
Arnaldo Mario Litterio ◽  
Esteban Alberto Nantes ◽  
Juan Manuel Larrosa ◽  
Liliana Julia Gómez

Purpose: The purpose of this paper is to apply tools from social network theory to the detection of potential influencers, from a marketing point of view, within online communities. It proposes a method to detect significant actors based on centrality metrics.

Design/methodology/approach: A matrix is proposed for classifying the individuals that make up a social network, based on the combination of eigenvector centrality and betweenness centrality. The model is tested on a Facebook fan page for a sporting event. NodeXL is used to extract and analyze information. Semantic analysis and agent-based simulation are used to test the model.

Findings: The proposed model is effective in detecting actors with the potential to spread a message efficiently relative to the rest of the community, which they achieve from their position within the network. Social network analysis (SNA), and the proposed model in particular, are useful for detecting subgroups of components with particular characteristics that are not evident from other analysis methods.

Originality/value: This paper approaches the application of SNA to online social communities from an empirical and experimental perspective. Its originality lies in combining information from two individual metrics to understand the phenomenon of influence. Online social networks are gaining relevance, and the existing literature on this subject is still fragmented and incipient. This paper contributes to a better understanding of this network phenomenon and, through the proposal of a novel method, to the development of better tools to manage it.
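The authors compute the two metrics with NodeXL; purely as an illustrative sketch (not the authors' classification matrix), eigenvector centrality via power iteration and betweenness centrality via Brandes' algorithm can be combined in plain Python to flag nodes that score above the median on both:

```python
from collections import deque

def eigenvector_centrality(adj, iters=100):
    # Power iteration on (I + A) for an undirected graph given as
    # {node: [neighbours]}; the identity shift avoids oscillation
    # on bipartite graphs.
    score = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: score[v] + sum(score[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values()) or 1.0
        score = {v: s / norm for v, s in nxt.items()}
    return score

def betweenness_centrality(adj):
    # Brandes' algorithm for unweighted graphs (unnormalized; each
    # undirected pair is counted from both endpoints, fine for ranking).
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:  # accumulate dependencies in reverse BFS order
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def influencers(adj):
    # Flag nodes at or above the median on both centrality metrics.
    ev, bt = eigenvector_centrality(adj), betweenness_centrality(adj)
    ev_med = sorted(ev.values())[len(ev) // 2]
    bt_med = sorted(bt.values())[len(bt) // 2]
    return [v for v in adj if ev[v] >= ev_med and bt[v] >= bt_med]
```

On a star-shaped network, for instance, the hub scores highest on both metrics, matching the intuition that it alone can spread a message efficiently.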


2004 ◽  
Vol 14 (6) ◽  
pp. 669-680
Author(s):  
PETER LJUNGLÖF

This paper implements a simple and elegant version of bottom-up Kilbury chart parsing (Kilbury, 1985; Wirén, 1992). This is one of the many chart parsing variants, all of which are based on the chart data structure. The chart parsing process uses inference rules to add new edges to the chart, and parsing is complete when no further edges can be added. One novel aspect of this implementation is that it does not rely on a global state for the chart. This makes the code clean, elegant, and declarative, while retaining the same space and time complexity as the standard imperative implementations.
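Ljunglöf's implementation is declarative and avoids global state; as a much-simplified sketch of the general idea (passive edges only, binary rules, a local chart rather than a global one, and not Kilbury's actual inference rules), a bottom-up chart recognizer can be written as:

```python
def chart_recognize(words, lexicon, rules, start="S"):
    # Passive-edge bottom-up chart recognition.  An edge (i, j, A) means
    # category A spans words[i:j].  New edges are added by the inference
    # rule: (i, j, B), (j, k, C) and A -> B C yield (i, k, A),
    # repeated until no further edges can be added (a fixpoint).
    n = len(words)
    chart = {(i, i + 1, c) for i, w in enumerate(words) for c in lexicon[w]}
    changed = True
    while changed:
        changed = False
        for (i, j, b) in list(chart):
            for (j2, k, c) in list(chart):
                if j2 != j:
                    continue
                for a, rhs in rules:
                    if rhs == (b, c) and (i, k, a) not in chart:
                        chart.add((i, k, a))
                        changed = True
    return (0, n, start) in chart
```

For a toy grammar with S -> NP VP, NP -> Det N, and VP -> V NP, the sentence "the dog sees the cat" is recognized once the edge (0, 5, S) enters the chart; the chart is an ordinary local value, illustrating the paper's point that no global state is required.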


1996 ◽  
Vol 11 (2) ◽  
pp. 115-144 ◽  
Author(s):  
Johann Blieberger ◽  
Roland Lieger
