Personalized nudging

2020 ◽  
pp. 1-10
Author(s):  
STUART MILLS

Abstract: A criticism of behavioural nudges is that they lack precision, sometimes nudging people who – had their personal circumstances been known – would have benefitted from being nudged differently. This problem may be solved through a programme of personalized nudging. This paper proposes a two-component framework for personalization that suggests choice architects can personalize both the choices being nudged towards (choice personalization) and the method of nudging itself (delivery personalization). To do so, choice architects will require access to heterogeneous data. This paper argues that such data need not take the form of big data, but agrees with previous authors that the opportunities to personalize nudges increase as data become more accessible. Finally, this paper considers two challenges that a personalized nudging programme must consider, namely the risk personalization poses to the universality of laws, regulation and social experiences, and the data access challenges policy-makers may encounter.
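To make the two components concrete, here is a minimal illustrative sketch; the data fields (risk aversion, responsiveness to defaults) and the nudge rules are hypothetical assumptions, not taken from the paper:

```python
# Hypothetical sketch of the two-component framework; every field and rule
# below is an invented illustration, not the paper's specification.
from dataclasses import dataclass

@dataclass
class Person:
    risk_averse: bool            # assumed heterogeneous datum
    responds_to_defaults: bool   # assumed heterogeneous datum

def choice_personalization(p: Person) -> str:
    # Personalize WHAT is nudged towards.
    return "low-risk pension fund" if p.risk_averse else "balanced pension fund"

def delivery_personalization(p: Person) -> str:
    # Personalize HOW the nudge is delivered.
    return "default enrolment" if p.responds_to_defaults else "reminder message"

p = Person(risk_averse=True, responds_to_defaults=False)
print(choice_personalization(p), "via", delivery_personalization(p))
```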

2020 ◽  
Vol 46 (1) ◽  
pp. 55-75
Author(s):  
Ying Long ◽  
Jianting Zhao

This paper examines how mass ridership data can help describe cities from the bikers' perspective. We explore the possibility of using the data to reveal general bikeability patterns in 202 major Chinese cities. This process is conducted by constructing a bikeability rating system, the Mobike Riding Index (MRI), to measure bikeability in terms of usage frequency and the built environment. We first investigated mass ridership data and relevant supporting data; we then established the MRI framework and calculated MRI scores accordingly. This study finds that people tend to ride shared bikes at speeds close to 10 km/h for an average distance of 2 km roughly three times a day. The MRI results show that at the street level, the weekday and weekend MRI distributions are analogous, with an average score of 49.8 (range 0–100). At the township level, high-scoring townships are those close to the city centre; at the city level, the MRI is unevenly distributed, with high-MRI cities along the southern coastline or in the middle inland area. These patterns have policy implications for urban planners and policy-makers. This is the first and largest-scale study to incorporate mobile bike-share data into bikeability measurements, thus laying the groundwork for further research.
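As a hedged illustration of how a rating index of this kind can combine usage frequency with built-environment indicators on a 0–100 scale, consider the sketch below; the weights, sub-scores and normalization are invented, not the published MRI method:

```python
# Illustrative MRI-style bikeability score; weights and inputs are assumptions.
def mri_score(usage_freq: float, built_env: float,
              w_usage: float = 0.5, w_env: float = 0.5) -> float:
    """Combine normalized (0-1) usage and built-environment sub-scores
    into a 0-100 bikeability index."""
    assert 0 <= usage_freq <= 1 and 0 <= built_env <= 1
    return 100 * (w_usage * usage_freq + w_env * built_env)

# A street with above-average ridership but a middling built environment:
print(mri_score(usage_freq=0.7, built_env=0.4))  # 55.0
```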


2021 ◽  
pp. 0734242X2098082
Author(s):  
Md. Sazzadul Haque ◽  
Shafkat Sharif ◽  
Aseer Masnoon ◽  
Ebne Rashid

The SARS-CoV-2 pandemic has had both positive and negative effects on the environment. Major concerns over personal hygiene, the mandating and easing of lockdown actions and the slackening of some policy measures have led to a massive surge in the use of disposable personal protective equipment (PPE) and other single-use plastic items. This has generated an enormous amount of plastic waste from both healthcare and household units, and will continue to do so for the foreseeable future. Beyond healthcare workers, the general public have become accustomed to using PPE. These habits are threatening the land and marine environment with immense loads of plastic waste, owing to improper disposal practices across the world, especially in developing nations. Contaminated PPE has already made its way to the oceans, where it will inevitably break down into plastic particles and may also spread pathogen-driven diseases. This study provides an estimation-based approach to quantifying the contaminated plastic waste that can be expected daily from the massive usage of PPE (e.g. facemasks) under countrywide mandated regulations on PPE usage. The situation of Bangladesh is analysed, and projections reveal that a total of 3.4 billion pieces of single-use facemasks, hand sanitizer bottles, hand gloves and disposable polyethylene bags will be produced monthly, giving rise to 472.30 t of disposable plastic waste per day. The equations provided for quantifying waste from used single-use plastics and PPE can be applied to other countries for rough estimations, and the recommendations discussed will help concerned authorities and policy-makers to design effective response plans. Sustainable plastic waste management for the current and post-pandemic period can then be planned and acted upon.
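In the spirit of the estimation equations the authors provide (not reproduced here), the sketch below shows the kind of calculation involved; every input value and the per-mask mass are illustrative assumptions, not the paper's published coefficients:

```python
# Rough PPE-waste estimation sketch; all numbers are invented placeholders.
def daily_mask_waste_tonnes(population: float,
                            usage_rate: float,       # share of people masking daily
                            masks_per_person: float, # masks used per person per day
                            mask_mass_g: float) -> float:
    masks_per_day = population * usage_rate * masks_per_person
    return masks_per_day * mask_mass_g / 1e6  # grams -> tonnes

# Example with made-up inputs for a population of 160 million:
print(daily_mask_waste_tonnes(160e6, 0.8, 1.0, 3.0))  # ~384 t/day
```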


2020 ◽  
Vol 12 (14) ◽  
pp. 5595 ◽  
Author(s):  
Ana Lavalle ◽  
Miguel A. Teruel ◽  
Alejandro Maté ◽  
Juan Trujillo

Fostering sustainability is paramount for Smart City development. Lately, Smart Cities have been benefiting from the rise of Big Data coming from IoT devices, leading to improvements in monitoring and prevention. However, monitoring and prevention processes require visualization techniques as a key component. Indeed, in order to prevent possible hazards (such as fires, leaks, etc.) and optimize their resources, Smart Cities require adequate visualizations that provide insights to decision makers. Nevertheless, visualization of Big Data has always been a challenging issue, especially when such data are generated in real time. This problem becomes even bigger in Smart City environments, since we have to deal with many different groups of users and multiple heterogeneous data sources. Without a proper visualization methodology, complex dashboards that combine data of different natures are difficult to understand. In order to tackle this issue, we propose a methodology based on visualization techniques for Big Data, aimed at improving the evidence-gathering process by assisting users in decision making in the context of Smart Cities. Moreover, in order to assess the impact of our proposal, a case study based on service calls for a fire department is presented, applying our findings to data coming from citizen calls. The results of this work thus contribute to the optimization of resources, namely fire-extinguishing battalions, helping to improve their effectiveness and, as a result, the sustainability of a Smart City that operates better with fewer resources. Finally, in order to evaluate the impact of our proposal, we have performed an experiment with users who are not experts in data visualization.
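To give a flavour of what a visualization methodology can decide, here is a toy sketch that maps user and data characteristics to a chart type; the rules are invented stand-ins, not the methodology proposed in the paper:

```python
# Toy visualization-selection rules; purely illustrative assumptions.
def recommend_chart(user_expert: bool, streaming: bool, n_dims: int) -> str:
    if streaming:
        # Real-time sources need continuously updating views.
        return "live line chart" if n_dims == 1 else "streaming heatmap"
    if n_dims == 1:
        return "histogram"
    if n_dims == 2:
        return "scatter plot" if user_expert else "bar chart"
    # High-dimensional data: simpler encodings for non-expert users.
    return "parallel coordinates" if user_expert else "faceted small multiples"

print(recommend_chart(user_expert=False, streaming=True, n_dims=1))
```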


2021 ◽  
pp. 074391562199967
Author(s):  
Raffaello Rossi ◽  
Agnes Nairn ◽  
Josh Smith ◽  
Christopher Inskip

The internet raises substantial challenges for policy makers in regulating gambling harm. The proliferation of gambling advertising on Twitter is one such challenge, but its sheer scale renders it extremely hard to investigate using conventional techniques. In this paper the authors present three UK Twitter gambling advertising studies using both Big Data analytics and manual content analysis to explore the volume and content of gambling adverts, the age and engagement of followers, and compliance with UK advertising regulations. They analyse 890k organic adverts from 417 accounts along with data on 620k followers and 457k engagements (replies and retweets). They find that around 41,000 UK children follow Twitter gambling accounts, and that two-thirds of gambling advertising Tweets fail to fully comply with regulations. Adverts for eSports gambling are markedly different from those for traditional gambling (e.g. on soccer, casinos and lotteries) and appear to have strong appeal for children, with 28% of engagements with eSports gambling ads coming from under-16s. The authors make six policy recommendations: spotlight eSports gambling advertising; create new social-media-specific regulations; revise regulation on content appealing to children; use technology to block under-18s from seeing gambling ads; require ad-labelling of organic gambling Tweets; and deploy better enforcement.
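One analysis step described above, the share of engagements coming from under-16s per ad category, reduces to a grouped count; a toy sketch follows, where the record schema and values are assumptions rather than the authors' dataset:

```python
# Toy engagement-by-age analysis; records below are invented examples.
from collections import defaultdict

engagements = [
    {"category": "esports", "age": 14},
    {"category": "esports", "age": 32},
    {"category": "soccer",  "age": 41},
    {"category": "soccer",  "age": 15},
]

totals, under16 = defaultdict(int), defaultdict(int)
for e in engagements:
    totals[e["category"]] += 1
    under16[e["category"]] += e["age"] < 16  # bool counts as 0/1

for cat in totals:
    print(cat, f"{100 * under16[cat] / totals[cat]:.0f}% under-16")
```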


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Mahdi Torabzadehkashi ◽  
Siavash Rezaei ◽  
Ali HeydariGorji ◽  
Hosein Bobarshad ◽  
Vladimir Alves ◽  
...  

Abstract: In the era of big data applications, the demand for more sophisticated data centers and high-performance data processing mechanisms is increasing drastically. Data originally reside in storage systems; to process them, application servers must fetch them from storage devices, which imposes the cost of moving data across the system. This cost grows with the distance between the processing engines and the data, which is the key motivation for distributed processing platforms such as Hadoop that move processing closer to the data. Computational storage devices (CSDs) push the "move process to data" paradigm to its ultimate boundaries by deploying embedded processing engines inside storage devices. In this paper, we introduce Catalina, an efficient and flexible computational storage platform that provides a seamless environment to process data in-place. Catalina is the first CSD equipped with a dedicated application processor running a full-fledged operating system that provides filesystem-level data access for applications, so a vast spectrum of applications can be ported to run on Catalina CSDs. Thanks to these unique features, to the best of our knowledge, Catalina is the only in-storage processing platform that can be seamlessly deployed in clusters to run distributed applications such as Hadoop MapReduce and HPC applications in-place, without any modifications to the underlying distributed processing framework. As a proof of concept, we build a fully functional Catalina prototype and a CSD-equipped platform using 16 Catalina CSDs to run Intel HiBench Hadoop and HPC benchmarks and investigate the benefits of deploying Catalina CSDs in distributed processing environments. The experimental results show up to a 2.2× improvement in performance and a 4.3× reduction in energy consumption when running Hadoop MapReduce benchmarks. Additionally, thanks to the Neon SIMD engines, the performance and energy efficiency of DFT algorithms improve by up to 5.4× and 8.9×, respectively.
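A back-of-envelope sketch of the "move process to data" argument compares host-side processing (transfer plus compute) with in-place processing spread across several CSDs; all throughput figures below are illustrative assumptions, not Catalina measurements:

```python
# Toy cost model for in-storage vs. host-side processing; numbers invented.
def host_time(data_gb: float, link_gb_per_s: float, host_gb_per_s: float) -> float:
    # Move the data to the host, then process it there.
    return data_gb / link_gb_per_s + data_gb / host_gb_per_s

def in_storage_time(data_gb: float, csd_gb_per_s: float, n_csds: int) -> float:
    # Process the data where it already resides, in parallel across CSDs.
    return data_gb / (csd_gb_per_s * n_csds)

print(host_time(1000, link_gb_per_s=4, host_gb_per_s=10))   # 350.0 seconds
print(in_storage_time(1000, csd_gb_per_s=0.5, n_csds=16))   # 125.0 seconds
```

Even with each CSD assumed far slower than the host, avoiding the data movement and processing in parallel wins once the device count is large enough.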


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ikbal Taleb ◽  
Mohamed Adel Serhani ◽  
Chafik Bouhaddioui ◽  
Rachida Dssouli

Abstract: Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data concerns every aspect of data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may result in unpredictable consequences, in which case confidence in the data and its source is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a very costly and time-consuming process, since excessive computing resources are required. Maintaining quality through the Big Data lifecycle requires quality profiling and verification before any processing decision. We propose a BDQ Management Framework that enhances pre-processing activities while strengthening data control. The framework uses a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's Big Data profiling and sampling components, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The framework's exploratory profiling component plays the initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions, and it generates quality rules by applying various pre-processing activities and their related functions. These rules feed the Data Quality Profile and result in quality scores for the selected quality attributes. The framework implementation and the dataflow management across the various quality management processes are discussed, and the paper concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
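A minimal sketch of what a quality profile plus sample-based estimation could look like follows; the structure and the single completeness metric are assumptions for illustration, whereas the paper's actual profile captures more (requirements, dimensions, scores, rules):

```python
# Illustrative Data Quality Profile with sample-based completeness estimation.
import random
from dataclasses import dataclass, field

@dataclass
class QualityProfile:
    attribute: str
    dimensions: dict = field(default_factory=dict)  # e.g. {"completeness": 0.93}
    rules: list = field(default_factory=list)       # derived pre-processing rules

def estimate_completeness(values, sample_size=1000):
    # Estimate on a sample instead of the full dataset to save compute.
    sample = random.sample(values, min(sample_size, len(values)))
    return sum(v is not None for v in sample) / len(sample)

values = [1, None, 3, 4, None] * 1000  # toy column with 40% missing values
profile = QualityProfile("sensor_reading")
profile.dimensions["completeness"] = estimate_completeness(values)
if profile.dimensions["completeness"] < 0.9:   # assumed quality threshold
    profile.rules.append("impute_missing")
print(profile)
```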


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yusheng Lu ◽  
Jiantong Zhang

Purpose: The digital revolution, and the use of big data (BD) in particular, has important applications in the construction industry, where massive amounts of heterogeneous data must be analyzed to improve onsite efficiency. This article presents a systematic review and identifies future research directions, presenting conclusions derived from rigorous bibliometric tools. The results of this study may provide guidelines for construction engineering and global policymaking to address the current low efficiency of construction sites.
Design/methodology/approach: This study identifies research trends from 1,253 peer-reviewed papers, using general statistics, keyword co-occurrence analysis, critical review, and qualitative-bibliometric techniques in two rounds of search.
Findings: The number of studies in this area increased rapidly from 2012 to 2020. A significant number of publications originated in the UK, China, the US, and Australia; the smallest output among these four countries is more than twice the largest among the remaining countries. Keyword co-occurrence divides into three clusters: BD application scenarios, emerging technologies in BD, and BD management. Approaches currently being developed in BD analytics include machine learning, data mining, and heuristic optimization, with techniques such as graph convolutional and recurrent neural networks and natural language processing (NLP). Studies have focused on safety management, energy reduction, and cost prediction. Blockchain integrated with BD is a promising means of managing construction contracts.
Research limitations/implications: The study of BD is in a stage of rapid development, and this bibliometric analysis is only a part of the necessary practical analysis.
Practical implications: National policies, temporal and spatial distribution, and BD flow are interpreted, and the results may provide guidelines for policymakers. Overall, this work develops the body of knowledge, producing a reference point and identifying future directions.
Originality/value: To our knowledge, this is the first bibliometric review of BD in the construction industry. The study can also benefit construction practitioners by providing a focused perspective on BD for emerging practices in the construction industry.
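Keyword co-occurrence analysis, the technique behind the three clusters reported above, reduces at its core to counting keyword pairs across papers; a toy sketch with invented keyword lists:

```python
# Toy keyword co-occurrence count; the keyword lists are invented examples.
from itertools import combinations
from collections import Counter

papers = [
    ["big data", "machine learning", "safety management"],
    ["big data", "blockchain"],
    ["machine learning", "safety management"],
]

cooc = Counter()
for kws in papers:
    # Count each unordered keyword pair once per paper.
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

print(cooc.most_common(3))
```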


2017 ◽  
Vol 2 (Suppl. 1) ◽  
pp. 1-8
Author(s):  
Denis Horgan ◽  
Walter Ricciardi

In the world of modern health, despite amazing recent advances (the advent of personalised medicine is just one example), "change" for most citizens seems slow. There are clear discrepancies in the availability of the best care for all: the divisions in access from country to country, and from wealthy to poor, are large. There are even discrepancies between regions of the larger countries, where access often varies alarmingly. Too many Member States (which hold the competence for healthcare) appear to cling stubbornly to the concept of "one-size-fits-all" healthcare, often stifling the advances made possible by personalised medicine. Meanwhile, the legislative arena encompassing health has grown big and unwieldy in many respects, and bigger is not always better. The health advances mentioned above, increased knowledge on the part of patients, the emergence of Big Data and more are quickly changing the face of healthcare in Europe. But healthcare thinking across the EU isn't changing fast enough. The new technologies will certainly speak for themselves, but only if allowed to do so. Acknowledging that, this article highlights a positive reform agenda while explaining that new avenues need to be explored.


1999 ◽  
Vol 33 (3) ◽  
pp. 55-66 ◽  
Author(s):  
L. Charles Sun

An interactive data access and retrieval system, developed at the U.S. National Oceanographic Data Center (NODC) and available at http://www.node.noaa.gov, is presented in this paper. The purposes of this paper are: (1) to illustrate the procedures for quality control and for loading oceanographic data into the NODC ocean databases and (2) to describe the development of a system to manage, visualize, and disseminate the NODC data holdings over the Internet. The objective of the system is to provide easy access to the data required by data assimilation models. With advances in the scientific understanding of ocean dynamics, data assimilation models require the synthesis of data from a variety of sources. Modern intelligent data systems usually involve integrating distributed heterogeneous data and information sources. As the repository for oceanographic data, NOAA's National Oceanographic Data Center (NODC) is in a unique position to develop such a data system. In support of data assimilation needs, NODC has developed a system to facilitate browsing of the oceanographic environmental data and information available online at NODC. Users may select oceanographic data based on geographic areas, time periods and measured parameters. Once the selection is complete, users may produce a station location plot, produce plots of the parameters or retrieve the data.
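The selection step described above (geographic area, time period, measured parameter) amounts to filtering records on those fields; a hedged sketch with invented station records and field names:

```python
# Toy data-selection filter; the record schema and values are assumptions.
stations = [
    {"lat": 30.1, "lon": -80.2, "year": 1998, "param": "temperature"},
    {"lat": 45.0, "lon": -30.0, "year": 1995, "param": "salinity"},
]

def select(stations, lat_range, year_range, param):
    # Keep stations inside the latitude band and time window with the
    # requested measured parameter.
    return [s for s in stations
            if lat_range[0] <= s["lat"] <= lat_range[1]
            and year_range[0] <= s["year"] <= year_range[1]
            and s["param"] == param]

print(select(stations, (25, 35), (1990, 2000), "temperature"))
```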

