An Anthropocentric and Enhanced Predictive Approach to Smart City Management

Smart Cities ◽  
2021 ◽  
Vol 4 (4) ◽  
pp. 1366-1390
Author(s):  
Davide Carneiro ◽  
António Amaral ◽  
Mariana Carvalho ◽  
Luís Barreto

Cities are becoming increasingly complex to manage as they grow in size and must provide higher living standards for their populations. New technology-based solutions must be developed to accommodate this growth and ensure that it is socially sustainable. This paper puts forward the notion that these solutions must share some properties: they should be anthropocentric, holistic, horizontal, multi-dimensional, multi-modal, and predictive. We propose an architecture in which streaming data sources that characterize the city context feed a real-time graph of the city’s assets and states, as well as train predictive models that hint at near-future states of the city. This allows human decision-makers and automated services to make decisions, both for the present and for the future. To achieve this, multiple data sources about a city were gradually connected to a message broker, enabling increasingly rich decision support. Results show that it is possible to predict future states of a city in aspects such as traffic, air pollution, and other ambient variables. The key innovative aspect of this work is that, as opposed to the majority of existing approaches, which focus on a real-time view of the city, we also provide insights into its near-future state, thus allowing city services to plan ahead and adapt accordingly. The main goal is to optimize decision-making by anticipating future states of the city and acting on them.
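The pipeline this abstract describes, streaming sources publishing to a message broker that feeds both a real-time city graph and predictive models, can be sketched in miniature. All names here (Broker, CityGraph, Predictor, the "traffic" topic) are illustrative stand-ins rather than the paper's actual components, and the sliding-window mean is a placeholder for a real predictive model:

```python
from collections import defaultdict, deque

class Broker:
    """Minimal in-memory publish/subscribe broker (an illustrative
    stand-in for a real system such as Kafka or MQTT)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Fan the message out to every handler registered on the topic.
        for handler in self.subscribers[topic]:
            handler(message)

class CityGraph:
    """Real-time view: the latest observed state of each city asset."""
    def __init__(self):
        self.state = {}
    def update(self, msg):
        self.state[msg["asset"]] = msg["value"]

class Predictor:
    """Toy near-future model: predicts the mean of a sliding window."""
    def __init__(self, window=3):
        self.history = defaultdict(lambda: deque(maxlen=window))
    def update(self, msg):
        self.history[msg["asset"]].append(msg["value"])
    def predict(self, asset):
        h = self.history[asset]
        return sum(h) / len(h)

# Both consumers subscribe to the same stream, so one published
# reading simultaneously refreshes the graph and the model.
broker = Broker()
graph, predictor = CityGraph(), Predictor()
broker.subscribe("traffic", graph.update)
broker.subscribe("traffic", predictor.update)

for v in [100, 120, 110]:
    broker.publish("traffic", {"asset": "avenue-1", "value": v})

current = graph.state["avenue-1"]          # present state
forecast = predictor.predict("avenue-1")   # near-future estimate
```

In a production deployment the same decoupling applies: the broker lets new data sources and new consumers (dashboards, automated services, model trainers) be attached without changing the producers.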

2011 ◽  
pp. 1323-1331
Author(s):  
Jeffrey W. Seifert

A significant amount of attention appears to be focusing on how to better collect, analyze, and disseminate information. In doing so, technology is commonly and increasingly looked upon as both a tool, and, in some cases, a substitute, for human resources. One such technology that is playing a prominent role in homeland security initiatives is data mining. Similar to the concept of homeland security, while data mining is widely mentioned in a growing number of bills, laws, reports, and other policy documents, an agreed-upon definition or conceptualization of data mining appears to be generally lacking within the policy community (Relyea, 2002). While data mining initiatives are usually purported to provide insightful, carefully constructed analysis, at various times data mining itself is alternatively described as a technology, a process, and/or a productivity tool. In other words, data mining, or factual data analysis, or predictive analytics, as it is also sometimes referred to, means different things to different people. Regardless of which definition one prefers, a common theme is the ability to collect and combine, virtually if not physically, multiple data sources, for the purposes of analyzing the actions of individuals. In other words, there is an implicit belief in the power of information, suggesting a continuing trend in the growth of “dataveillance,” or the monitoring and collection of the data trails left by a person’s activities (Clarke, 1988). More importantly, it is clear that there are high expectations for data mining, or factual data analysis, to be an effective tool. Data mining is not a new technology, but its use is growing significantly in both the private and public sectors. Industries such as banking, insurance, medicine, and retailing commonly use data mining to reduce costs, enhance research, and increase sales.
In the public sector, data mining applications initially were used as a means to detect fraud and waste, but have grown to also be used for purposes such as measuring and improving program performance. While not completely without controversy, these types of data mining applications have gained greater acceptance. However, some national defense/homeland security data mining applications represent a significant expansion in the quantity and scope of data to be analyzed. Moreover, due to their security-related nature, the details of these initiatives (e.g., data sources, analytical techniques, access and retention practices, etc.) are usually less transparent.


2017 ◽  
Vol 98 (9) ◽  
pp. 1879-1896 ◽  
Author(s):  
Zengchao Hao ◽  
Xing Yuan ◽  
Youlong Xia ◽  
Fanghua Hao ◽  
Vijay P. Singh

Abstract In past decades, severe drought events have struck different regions around the world, leading to huge losses across a wide array of environmental and societal sectors. Because of the wide impacts of drought, it is of critical importance to monitor drought in near-real time and provide early warning. This article provides an overview of the development of drought monitoring and prediction systems (DMAPS) at regional and global scales. After introducing drought indicators, drought monitoring (based on different data sources and tools) is summarized, along with an introduction to statistical and dynamical drought prediction approaches. The current progress of the development and implementation of DMAPS with various indicators at different temporal and/or spatial resolutions, based on land surface modeling, remote sensing, and seasonal climate forecasts, at the regional and global scales is then reviewed. Advances in drought monitoring with multiple data sources and tools, and in prediction from multimodel ensembles, are highlighted. Also highlighted are challenges and opportunities, including near-real-time and long-term data products, linking indicators to impacts, improving prediction skill, and information dissemination/communication. The review of the different components of these systems will provide useful guidelines and insights for the future development of effective DMAPS to aid drought modeling and management.
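As a concrete, much simplified illustration of the drought indicators the review introduces, a standardized precipitation anomaly can flag unusually dry months. Real indices such as the SPI first fit a probability distribution (typically Gamma) to the precipitation record before standardizing; the plain z-score sketch below omits that step, and the precipitation series is invented:

```python
import statistics

def standardized_index(series):
    """Crude standardized anomaly (z-score) of a precipitation series,
    a simplified stand-in for indices such as the SPI, which fits a
    Gamma distribution before standardizing."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    return [(x - mu) / sigma for x in series]

# Hypothetical monthly precipitation totals (mm); month 3 is anomalously dry.
monthly_precip = [80, 75, 90, 20, 85, 78]
z = standardized_index(monthly_precip)
# Flag months more than one standard deviation below normal.
drought_months = [i for i, v in enumerate(z) if v < -1.0]
```

Operational systems compute such indices at many accumulation windows (1, 3, 6, 12 months) to separate short-term agricultural from long-term hydrological drought.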


2021 ◽  
Author(s):  
Daniel Cardoso Braga ◽  
Mohammadreza Kamyab ◽  
Brian Harclerode ◽  
Deep Joshi

Abstract During drilling, surveys to determine the wellbore trajectory are performed at every drilling connection. However, due to the offset between the survey instrument and the bit (typically between 30-100 ft), each survey represents the sensor's position, which lags behind the bit. This paper describes a method to automatically calculate projections to the bit in real time utilizing multiple data sources: the WITSML stream, BHA components, and rotary trend analysis while rotary drilling. The projection-to-bit calculation routine is performed in real time every 30 seconds. This paper presents the results of projections for four horizontal unconventional wells drilled in West Texas. Nearly 75,000 projections were generated on the four wells, validated against 839 survey stations, with the median divergence of the projections from the nearest survey stations being less than one foot.
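The abstract does not give the projection formula itself. A common approach consistent with its description is to extrapolate the rotary trend (observed build and turn rates) over the sensor-to-bit offset and apply the standard minimum-curvature calculation between the surveyed attitude and the extrapolated attitude at the bit. The sketch below illustrates that generic idea, not the authors' implementation, and all parameter names are assumptions:

```python
import math

def project_to_bit(inc, azi, build_rate, turn_rate, offset_md):
    """Project the position change from the survey sensor to the bit.

    inc, azi   -- inclination/azimuth at the last survey station (degrees)
    build_rate -- build rate from the rotary trend (degrees per ft)
    turn_rate  -- turn rate from the rotary trend (degrees per ft)
    offset_md  -- sensor-to-bit distance along the wellbore (ft)
    Returns (dNorth, dEast, dTVD) in ft via the minimum-curvature formula.
    """
    i1, a1 = math.radians(inc), math.radians(azi)
    # Extrapolate the attitude at the bit assuming the trend holds.
    i2 = math.radians(inc + build_rate * offset_md)
    a2 = math.radians(azi + turn_rate * offset_md)
    # Dogleg angle between the two attitude vectors.
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    # Ratio factor; 1.0 in the straight-hole limit.
    rf = 1.0 if dl < 1e-9 else 2.0 / dl * math.tan(dl / 2.0)
    half = offset_md / 2.0 * rf
    dn = half * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2))
    de = half * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2))
    dtvd = half * (math.cos(i1) + math.cos(i2))
    return dn, de, dtvd

# 60 ft sensor-to-bit offset in a horizontal lateral holding angle due north:
dn, de, dtvd = project_to_bit(inc=90.0, azi=0.0, build_rate=0.0,
                              turn_rate=0.0, offset_md=60.0)
```

In a real-time service this calculation would be re-run on each WITSML update, with the build and turn rates re-estimated from the most recent rotary-drilling survey trend.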


2005 ◽  
Vol 31 (5) ◽  
pp. 649-662
Author(s):  
Virginia Valentine

Continuous glucose monitoring (CGM) is a new technology that is poised to dramatically alter the practice of managing diabetes in the near future. To understand the potential utility of CGM in clinical practice, the goal for monitoring glucose must be redefined. Is obtaining a mere snapshot of the current blood glucose level a satisfactory goal? This differs substantially from a more comprehensive assessment of the patient’s current (and immediate future) glycemic status that could be gained through real-time continuous monitoring. The diabetes educator will play a critical role in introducing and implementing the next generation of real-time CGM systems.


Author(s):  
Yuandong Liu ◽  
Zhihua Zhang ◽  
Lee D. Han ◽  
Candace Brakewood

Traffic queues, especially queues caused by non-recurrent events such as incidents, catch high-speed drivers approaching the end of queue (EOQ) by surprise and are therefore a safety concern. Though the topic has been extensively studied, the identification of the EOQ has been limited by the spatial-temporal resolution of traditional data sources. This study explores the potential of location-based crowdsourced data, specifically Waze user reports. It presents a dynamic clustering algorithm that can group the location-based reports in real time and identify the spatial-temporal extent of congestion as well as the EOQ. The algorithm is a spatial-temporal extension of the density-based spatial clustering of applications with noise (DBSCAN) algorithm for real-time streaming data, with an adaptive threshold selection procedure. The proposed method was tested with 34 traffic congestion cases in the Knoxville, Tennessee area of the United States. It is demonstrated that the algorithm can effectively detect the spatial-temporal extent of congestion based on Waze report clusters and identify the EOQ in real time. The Waze report-based detections are compared to detections based on roadside sensor data. The results are promising: the EOQ identification time of Waze is similar to the EOQ detection time of traffic sensor data, with only a 1.1 min difference on average. In addition, Waze generates 1.9 EOQ detection points every mile, compared to 1.8 detection points generated by traffic sensor data, suggesting the two data sources are comparable with respect to reporting frequency. The results indicate that Waze is a valuable complementary source for EOQ detection where no traffic sensors are installed.
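A minimal batch rendition of the clustering idea, where two Waze reports are neighbors only if they are close in both space and time, can be written as a spatial-temporal DBSCAN. This sketch omits the paper's streaming operation and adaptive threshold selection; the thresholds and report encoding are illustrative:

```python
def st_dbscan(reports, eps_space, eps_time, min_pts):
    """reports: list of (mile_marker, minutes) tuples.
    Returns one cluster label per report; -1 marks noise."""
    n = len(reports)
    labels = [None] * n

    def neighbors(i):
        # A neighbor must be within BOTH the spatial and temporal radius.
        mi, ti = reports[i]
        return [j for j, (mj, tj) in enumerate(reports)
                if abs(mi - mj) <= eps_space and abs(ti - tj) <= eps_time]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point absorbed into cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:   # core point: keep expanding
                queue.extend(nbrs_j)
        cluster += 1
    return labels

# Three reports close in space/time form one congestion cluster;
# an isolated report elsewhere on the corridor is noise.
reports = [(10.0, 0), (10.2, 2), (10.4, 4), (25.0, 3)]
labels = st_dbscan(reports, eps_space=0.5, eps_time=5, min_pts=2)
```

Given a cluster, the EOQ estimate would then be its most upstream report, e.g. the extreme mile marker within the cluster relative to the travel direction.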


Author(s):  
Rema Nilakanta ◽  
Laura Zurita ◽  
Olatz López Fernandez ◽  
Elsebeth Korsgaard Sorensen ◽  
Eugene S. Takle

This chapter presents a preliminary critique of an online transatlantic collaboration designed for collaborative learning. The critique by external reviewers, using qualitative methods within the interpretivist paradigm, hints at critical factors necessary for successful online collaborative learning. The evaluation seems to support the view that in order to raise the quality of online dialogue and enhance deep learning, it is good practice to heed, as well as give voice to, participants’ needs by involving them directly in the design of the course. This has the potential to enhance student motivation and learning. The authors plan to continue their work and present a more grounded and detailed evaluation in the near future involving multiple data sources, comprehensive surveys, and document analysis.


2020 ◽  
Vol 21 (4) ◽  
pp. 611-623
Author(s):  
Manjunatha S ◽  
Annappa B

Advancement in Information Communication Technology (ICT) and the Internet of Things (IoT) has led to the continuous generation of a large amount of data. Smart city projects are being implemented in various parts of the world where analysis of public data helps in providing a better quality of life. Data analytics plays a vital role in many such data-driven applications. Real-time analytics for finding valuable insights at the right time using smart city data is crucial in making appropriate decisions for city administration. It is essential to use multiple data sources as input for the analysis to achieve better and more accurate data-driven solutions. Public safety is one of the major concerns in any smart city project, in which real-time analytics is very useful for the early detection of valuable data patterns. It is crucial to predict crime-related incidents early and generate emergency alerts so that appropriate decisions can be made to provide security to the people and protect the city infrastructure. This paper discusses the proposed real-time big data analytics framework with a data blending approach using multiple data sources for smart city applications. Analytics using multiple data sources for a specific data-driven solution helps in finding more data patterns, which in turn increases the accuracy of analytics results. The data preprocessing phase is a challenging task in data analytics when data is being ingested continuously in real time into the analytics system. The proposed system helps in the preprocessing of real-time data with data blending of the multiple data sources used in the analytics. The proposed framework is beneficial when data from multiple sources is ingested in real time as input data, and it is also flexible enough to use any additional data source of interest.
The experimental work carried out with the proposed framework, using multiple data sources to find crime-related insights in real time, supports public safety solutions in the smart city. The experimental outcome shows a significant increase in the number of identified useful data patterns as the number of data sources increases. A real-time emergency alert system supporting the public safety solution is implemented using a machine learning-based classification algorithm with the proposed framework. The experiment is carried out with different classification algorithms, and the results show that Naive Bayes classification performs best in generating emergency alerts.
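As an illustration of the kind of classifier the experiment reports on, a categorical Naive Bayes with Laplace smoothing can be written from scratch in a few lines; a production system would more likely use scikit-learn's CategoricalNB. The incident features and labels below are invented toy data, not the paper's dataset:

```python
import math
from collections import defaultdict

class CategoricalNB:
    """Naive Bayes over categorical features with Laplace smoothing,
    written from scratch for illustration."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: y.count(c) / len(y) for c in self.classes}
        # Per-feature vocabulary, needed for the smoothing denominator.
        self.vocab = [set(row[f] for row in X) for f in range(len(X[0]))]
        self.counts = defaultdict(int)   # (class, feature, value) -> count
        self.totals = defaultdict(int)   # class -> sample count
        for row, c in zip(X, y):
            for f, v in enumerate(row):
                self.counts[(c, f, v)] += 1
            self.totals[c] += 1
        return self

    def predict(self, row):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.priors[c])
            for f, v in enumerate(row):
                num = self.counts[(c, f, v)] + 1          # Laplace smoothing
                den = self.totals[c] + len(self.vocab[f])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical incident records: (incident type, time of day) -> alert label.
X = [("assault", "night"), ("theft", "day"),
     ("assault", "night"), ("noise", "day")]
y = ["alert", "no", "alert", "no"]
model = CategoricalNB().fit(X, y)
```

Naive Bayes suits this streaming setting because its sufficient statistics are simple counts, which can be updated incrementally as new incident records are ingested.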


2019 ◽  
Vol 2019 ◽  
pp. 1-15 ◽  
Author(s):  
Mahdie Hasani ◽  
Arash Jahangiri ◽  
Ipek Nese Sener ◽  
Sirajum Munira ◽  
Justin M. Owens ◽  
...  

Over the last decade, demand for active transportation modes such as walking and bicycling has increased. While it is desirable to provide high levels of safety for these eco-friendly modes of travel, unfortunately, the overall percentage of pedestrian and bicycle fatalities increased from 13% to 18% of total road-related fatalities over that period. In San Diego County, although the total number of pedestrian and bicyclist fatalities decreased over the same period of time, a similar trend with a more drastic change is observed: the overall percentage of pedestrian and bicycle fatalities increased from 19.5% to 31.8%. This study aims to estimate pedestrian and bicyclist exposure and identify signalized intersections with the highest risk for walking and bicycling within the city of San Diego, California, USA. Multiple data sources, such as automated pedestrian and bicycle counters, video cameras, and crash data, were utilized. Data mining techniques, a new sampling strategy, and automated video processing methods were adopted to demonstrate a holistic approach that can be applied to identify facilities with the highest need for improvement. Cluster analysis coupled with stratification was employed to select a representative sample of intersections for data collection. The automated pedestrian and bicycle counting models utilized in this study reached high accuracy, provided that certain conditions existed in the video data. Results from exposure modeling showed that pedestrian and bicyclist volume was characterized by transportation network, population, traffic generator, and land use variables. There were both similarities and differences between the pedestrian and bicycle models, including different spatial scales of influence by mode. Additionally, the study quantified risk by incorporating injury severity levels, frequency of victims, distance crossed, and exposure into a single equation.
It was found that not all intersections with the highest number of pedestrian and bicyclist victims were identified as high-risk after exposure and other factors such as crash severity were taken into account.
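The exact single-equation risk formulation is not given in the abstract. One hypothetical way to combine the four factors it names is a severity-weighted victim count scaled by crossing distance and normalized by exposure; every weight and field name below is an assumption for illustration, not the study's formula:

```python
def intersection_risk(victims_by_severity, weights, crossing_distance, exposure):
    """Illustrative risk score for a signalized intersection.

    victims_by_severity -- victim counts per severity level,
                           e.g. {"fatal": 1, "severe": 2, "minor": 5}
    weights             -- hypothetical severity weights
    crossing_distance   -- pedestrian crossing distance (ft)
    exposure            -- estimated pedestrian/bicyclist volume
    """
    weighted = sum(weights[s] * n for s, n in victims_by_severity.items())
    # Normalizing by exposure is what lets a low-volume intersection with
    # few victims still rank as high-risk per crossing.
    return weighted * crossing_distance / exposure

risk = intersection_risk({"fatal": 1, "severe": 2, "minor": 5},
                         {"fatal": 10, "severe": 4, "minor": 1},
                         crossing_distance=80, exposure=10000)
```

The exposure term in the denominator captures the study's key finding: once victim counts are normalized by how many people actually cross, the ranking of high-risk intersections changes.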

