Café and Restaurant under My Home: Predicting Urban Commercialization through Machine Learning

2021 ◽  
Vol 13 (10) ◽  
pp. 5699
Author(s):  
Seung-Chul Noh ◽  
Jung-Ho Park

The number of small commercial stores opening in housing structures in Seoul has soared since the beginning of this century. While commercialization generally increases urban vitality and achieves land use mix, cafés and restaurants in low-rise residential areas may attract large transient populations, bringing increased noise and crime to residential areas. Urban commercialization is so fast and prevalent that neither urban researchers nor policymakers can respond to it in a timely manner without a practical prediction tool. Focusing on cafés and restaurants, we propose an XGBoost machine learning model that can predict commercial store openings in urban residential areas and further serve as an early warning system. Our findings highlight a large difference in predictor importance between the variables used in our machine learning model. The most important predictor relates to land price, indicating that economic motivation drives the conversion of urban housing to small cafés and restaurants. The Mapo neighborhood is predicted to be the most prone to the commercialization of urban housing; the urgency of preparing it for the expected commercialization therefore deserves underscoring. Overall, our results show that the machine learning approach can be applied to predict changes in land use and contribute to timely policy design in rapidly changing urban contexts.
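The setup described above can be sketched in miniature. This is a hedged stand-in, not the paper's pipeline: it uses scikit-learn's GradientBoostingClassifier in place of XGBoost, synthetic data, and hypothetical predictor names (land_price, floor_area, dist_to_subway), with land price deliberately made the dominant signal to mirror the reported finding.

```python
# Minimal sketch of a gradient-boosted classifier for predicting store
# openings, with per-feature importances. All data and feature names are
# invented; GradientBoostingClassifier stands in for the paper's XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
land_price = rng.normal(size=n)     # hypothetical standardized land price
floor_area = rng.normal(size=n)     # hypothetical standardized floor area
dist_subway = rng.normal(size=n)    # hypothetical distance to subway

# Synthetic label: conversion to a café/restaurant driven mostly by land price
y = (land_price + 0.3 * floor_area + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([land_price, floor_area, dist_subway])

model = GradientBoostingClassifier(random_state=0).fit(X, y)
for name, imp in zip(["land_price", "floor_area", "dist_to_subway"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Inspecting feature_importances_ in this way is how the "predictor importance" ranking in the abstract would typically be produced for a tree-boosting model.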

2021 ◽  
Vol 145 ◽  
pp. 104311
Author(s):  
Harun Olcay Sonkurt ◽  
Ali Ercan Altınöz ◽  
Emre Çimen ◽  
Ferdi Köşger ◽  
Gürkan Öztürk

2019 ◽  
Author(s):  
Abdul Karim ◽  
Vahid Riahi ◽  
Avinash Mishra ◽  
Abdollah Dehzangi ◽  
M. A. Hakim Newton ◽  
...  

Abstract Representing molecules with a single type of features and using those features to predict their activities is one of the most common approaches in machine-learning-based chemical activity prediction. For molecular activities such as quantitative toxicity, performance depends on both the features extracted and the machine learning approach used; relying on one feature type and one model restricts prediction performance to that specific representation and model. In this paper, we study quantitative toxicity prediction and propose a machine learning model for it. Our model uses an ensemble of heterogeneous predictors instead of the typical homogeneous predictors. The predictors vary either in the type of features used or in the deep learning architecture employed. Each of these predictors presumably has its own strengths and weaknesses for toxicity prediction. Our motivation is to build a combined model that utilizes different feature types and architectures to obtain better collective performance than any individual predictor. We use six predictors in our model and test it on four standard quantitative toxicity benchmark datasets. Experimental results show that our model outperforms state-of-the-art toxicity prediction models in 8 out of 12 accuracy measures. Our experiments show that ensembling heterogeneous predictors improves performance over both single predictors and homogeneous ensembles of single predictors. The results indicate that each data representation or deep-learning-based predictor has its own strengths and weaknesses, so a model ensembling multiple heterogeneous predictors can go beyond the individual performance of each data representation or predictor type.
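The core idea of heterogeneous ensembling can be illustrated with a toy regression. This sketch averages the predictions of three structurally different regressors; the data is synthetic and the base models are simple stand-ins for the paper's deep predictors over different molecular feature types.

```python
# Heterogeneous ensemble sketch: three different model families, one
# averaging combiner. Synthetic data stands in for toxicity measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(0, 0.3, 300)  # synthetic "toxicity"
Xtr, Xte, ytr, yte = X[:200], X[200:], y[:200], y[200:]

models = [Ridge(), RandomForestRegressor(random_state=0), KNeighborsRegressor()]
preds = [m.fit(Xtr, ytr).predict(Xte) for m in models]
ensemble = np.mean(preds, axis=0)  # simple averaging combiner

def rmse(p):
    return float(np.sqrt(np.mean((p - yte) ** 2)))

print([round(rmse(p), 3) for p in preds], round(rmse(ensemble), 3))
```

A useful property of the averaging combiner: by convexity of squared error, the ensemble's mean squared error can never exceed the average of the members' errors, so it is guaranteed not to be worse than the worst member, and in practice it often beats all of them when the members' errors are weakly correlated.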


2020 ◽  
Vol 23 (4) ◽  
pp. 3233-3253 ◽  
Author(s):  
Rahim Taheri ◽  
Reza Javidan ◽  
Mohammad Shojafar ◽  
P. Vinod ◽  
Mauro Conti

2021 ◽  
Author(s):  
Li Linwei ◽  
Yiping Wu ◽  
Miao Fasheng ◽  
Xue Yang ◽  
Huang Yepiao

Abstract Constructing an accurate and stable displacement prediction model is essential to building a capable early warning system for landslide disasters. To overcome the drawbacks of previous displacement prediction models for step-like landslides, such as incomplete or excessive decomposition of cumulative displacements and input factors and the redundancy or lack of input factors, we propose an adaptive hybrid machine learning model composed of three parts. First, candidate factors are proposed based on the macroscopic deformation response of landslides, and the landslide displacement and its candidate factors are adaptively decomposed into different displacement and factor components by applying optimized variational mode decomposition (OVMD). Second, in the gray wolf optimizer-based kernel extreme learning machine (GWO-KELM) model, a global sensitivity analysis (GSA) of the prediction results of the different displacement components with respect to each decomposed factor is performed using the PAWN method, and the decomposed factors are then reduced according to the GSA results. Third, based on the reduced factors, optimal GWO-KELM models of the different displacement components are established to predict the displacement. Taking the Baishuihe landslide as an example, we used the raw data of three representative monitoring sites from June 2006 to December 2016 to verify the validity, accuracy, and stability of the model. The results indicate that the proposed hybrid model can effectively determine the displacement decomposition parameters. In addition, the model performed well over a three-year forecast with low model complexity.
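The decompose-predict-recombine idea can be sketched as follows. This is a deliberately crude stand-in: a linear fit replaces OVMD for trend extraction, scikit-learn's KernelRidge replaces the GWO-KELM, and the displacement series is synthetic.

```python
# Component-wise displacement prediction sketch: split a synthetic cumulative
# displacement series into trend + periodic components, model each separately,
# and sum the component forecasts for the hold-out horizon.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

t = np.arange(120, dtype=float)                  # monitoring periods (months)
disp = 0.5 * t + 5 * np.sin(2 * np.pi * t / 12)  # synthetic step-like displacement

coef = np.polyfit(t[:100], disp[:100], 1)        # trend extraction (stand-in for OVMD)
trend = np.polyval(coef, t)
periodic = disp - trend                          # residual periodic component

# Predict the periodic component from seasonal features; KernelRidge stands in
# for the kernel extreme learning machine. Recombine with the trend forecast.
feat = np.column_stack([np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
per_model = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.5).fit(feat[:100], periodic[:100])
forecast = np.polyval(coef, t[100:]) + per_model.predict(feat[100:])
rmse = float(np.sqrt(np.mean((forecast - disp[100:]) ** 2)))
print(f"hold-out RMSE: {rmse:.3f}")
```

The decomposition is lossless by construction (the components sum back to the original series), which is the sanity check any such pipeline should pass before the per-component models are trained.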


Author(s):  
C. Selvi ◽  
R. Shalini ◽  
V. Navaneethan ◽  
L. Santhiya

A university's reputation and standards are weighed by its students' performance and their part in the future economic prosperity of the nation; hence, a method of predicting students' upcoming academic performance is essential to provide advance information about their likely outcomes. A machine learning model can be developed to predict a student's upcoming scores, or their overall performance, from their previous academic record.
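A minimal version of such a predictor can be sketched with a linear model. All data here is fabricated: three hypothetical prior-term scores predicting the next term's score.

```python
# Illustrative sketch: predict a student's upcoming score from prior term
# scores with a linear regression. The dataset is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
prev = rng.uniform(40, 100, size=(200, 3))               # three prior term scores
next_score = prev.mean(axis=1) + rng.normal(0, 3, 200)   # next score tracks the average

model = LinearRegression().fit(prev[:150], next_score[:150])
pred = model.predict(prev[150:])
rmse = float(np.sqrt(np.mean((pred - next_score[150:]) ** 2)))
print(f"hold-out RMSE: {rmse:.2f}")
```

In a real deployment the feature set would extend well beyond prior scores (attendance, coursework, demographics), and the same fit/predict pattern carries over to richer models.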


Author(s):  
K. Bret Staudt Willet ◽  
Brooks D. Willet

Twitter has become a hub for many different types of educational conversations, denoted by hashtags and organized by a variety of affinities. Researchers have described these educational conversations on Twitter as sites for teacher professional development. Here, we studied #Edchat—one of the oldest and busiest Twitter educational hashtags—to examine the content of contributions for evidence of professional purposes. We collected tweets containing the text “#edchat” from October 1, 2017 to June 5, 2018, resulting in a dataset of 1,228,506 unique tweets from 196,263 different contributors. Through initial human-coded content analysis, we sorted a stratified random sample of 1,000 tweets into four inductive categories: tweets demonstrating evidence of different professional purposes related to (a) self, (b) others, (c) mutual engagement, and (d) everything else. We found 65% of the tweets in our #Edchat sample demonstrated purposes related to others, 25% demonstrated purposes related to self, and 4% of tweets demonstrated purposes related to mutual engagement. Our initial method was too time intensive—it would be untenable to collect tweets from 339 known Twitter education hashtags and conduct human-coded content analysis of each. Therefore, we are developing a scalable machine-learning model—a multiclass logistic regression classifier using an input matrix of features such as tweet types, keywords, sentiment, word count, hashtags, hyperlinks, and tweet metadata. The anticipated product of this research—a successful, generalizable machine learning model—would help educators and researchers quickly evaluate Twitter educational hashtags to determine where they might want to engage.
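The classifier under development, a multiclass logistic regression over tweet features, can be sketched in miniature. The tweets, labels, and bag-of-words features below are fabricated stand-ins for the study's hand-coded categories and its richer feature matrix (tweet types, sentiment, metadata).

```python
# Toy multiclass logistic regression over tweet text, standing in for the
# planned #Edchat classifier. All example tweets and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "sharing my new lesson plan blog post #edchat",       # purposes: self
    "great resource for teachers, check this out",        # purposes: others
    "what do you all think about grading policies?",      # purposes: mutual
    "my classroom reflection this week #edchat",          # purposes: self
    "retweeting this helpful reading list for teachers",  # purposes: others
    "join our chat tonight and share your views",         # purposes: mutual
]
labels = ["self", "others", "mutual", "self", "others", "mutual"]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print(clf.predict(["my weekly reflection blog #edchat"]))
```

At the scale described (over a million tweets, 339 hashtags), the same pipeline applies unchanged; only the vectorizer's vocabulary and the training set grow.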


2018 ◽  
Vol 20 (5) ◽  
pp. 1131-1147 ◽  
Author(s):  
N. Caradot ◽  
M. Riechel ◽  
M. Fesneau ◽  
N. Hernandez ◽  
A. Torres ◽  
...  

Abstract Deterioration models can be successfully deployed only if decision-makers trust the modelling outcomes and are aware of model uncertainties. Our study aims to address this issue by developing a set of clearly understandable metrics to assess the performance of sewer deterioration models from an end-user perspective. The developed metrics are used to benchmark the performance of a statistical model, GompitZ, based on survival analysis and Markov chains, and a machine learning model, Random Forest, an ensemble learning method based on decision trees. The models have been trained on the extensive CCTV dataset of the sewer network of Berlin, Germany (115,258 inspections). At the network level, both models give satisfactory outcomes, with deviations between predicted and inspected condition distributions below 5%. At the pipe level, the statistical model performs no better than a simple random model that attributes a condition class to each inspected pipe at random, whereas the machine learning model performs satisfactorily: 66.7% of the pipes inspected in bad condition were predicted correctly. The machine learning approach shows strong potential for helping operators identify pipes in critical condition for inspection programs, whereas the statistical approach is better suited to supporting strategic rehabilitation planning.
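The pipe-level Random Forest setup can be sketched as follows. This is a hedged illustration, not the Berlin study: the pipe attributes, condition rule, and data are synthetic stand-ins.

```python
# Random Forest sketch for pipe-level condition classification. Features
# (age, diameter, material) and the bad-condition rule are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(0, 100, n)          # years since installation
diameter = rng.uniform(100, 1500, n)  # mm
material = rng.integers(0, 3, n)      # encoded pipe material

# Synthetic rule: older, smaller pipes are more often in bad condition
bad = (age / 100 + (1500 - diameter) / 3000 + rng.normal(0, 0.2, n)) > 0.9
X = np.column_stack([age, diameter, material])
y = np.where(bad, "bad", "good")

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:800], y[:800])
acc = float((clf.predict(X[800:]) == y[800:]).mean())
print(f"hold-out accuracy: {acc:.2f}")
```

For the benchmarking described in the abstract, the relevant end-user metric would be recall on the "bad" class (the share of truly bad pipes caught), not overall accuracy, since inspection programs target exactly those pipes.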


2021 ◽  
Author(s):  
Maria Dominic ◽  
Deepa T

Abstract Approximate arithmetic comes into play when processors are used for multimedia signal processing applications, where the multiplier is central to many of the operations these processors perform. Compressors form the core architecture of the reduction stage as multiplier width increases, and approximations can be made in the compressor to limit error without degrading signal quality. In this work, a scalable-split compressor is designed, and a counter-matching method is developed for approximation. 32×32 and 16×16 multipliers built with these new compressors are synthesized with the Synopsys Design Compiler at 45 nm and show improvements of 25% in chip area and 27% in power. The split-scalable architecture attempts to reduce delay at a trade-off in area and power. Mean Error Distance (MED) and Normalized Error Distance (NED) are the parameters that assess the quality of any approximate-arithmetic-based design. 16-bit medical images are processed with both the existing and proposed multipliers, and the Peak Signal-to-Noise Ratio (PSNR) is compared. Finally, across several input types and targeted PSNR values, the best system is identified using a classical machine learning model.
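The MED and NED metrics named above are straightforward to compute exhaustively for small operand widths. In this sketch the "approximate multiplier" is a simple truncated multiplier (the k lowest product bits dropped), a stand-in for the paper's compressor-based design.

```python
# Exhaustive MED/NED computation for a toy approximate multiplier.
# Truncation of the k low product bits stands in for the paper's
# approximate-compressor design.
def approx_mul(a, b, k=4):
    """Drop the k least-significant bits of the exact product."""
    return ((a * b) >> k) << k

def med_ned(width=8, k=4):
    """Mean Error Distance and Normalized Error Distance over all inputs."""
    n = 1 << width
    total = 0
    for a in range(n):
        for b in range(n):
            total += abs(a * b - approx_mul(a, b, k))
    med = total / (n * n)
    max_out = (n - 1) * (n - 1)   # largest exact product, used to normalize
    return med, med / max_out

med, ned = med_ned()
print(f"MED={med:.2f}, NED={ned:.6f}")
```

For this truncation scheme the per-sample error is exactly the product's k low bits, so MED is bounded above by 2^k - 1; a hardware design's compressors introduce a different, design-specific error pattern, but the metrics are computed the same way.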

