Long-range forecasting properties of state-of-the-art models of demand for electric energy. Volume I. Final report

1976 ◽  
Author(s):  
Marco Calderoni ◽  
Jochen Doel ◽  
Korbinian Kramer ◽  
Stephen White ◽  
Daniel Mugnier ◽  
...  

Author(s):  
Taylor Valore

Upon relocating to a new, state-of-the-art, 260-acre campus outside Cairo, Egypt, the American University in Cairo (AUC) sought to revamp its annual planning and budgeting processes to address several deficiencies. Chief among them, long-range planning and annual budgeting were two independent exercises with little synchronization. This case study details the process and technical aspects of AUC’s transition to a centralized, synchronized planning and budgeting cycle, focused on determining appropriate workflows and leveraging database technologies to track planning initiatives through an approvals process. Readers will be able to weigh the drawbacks of centralization against the benefits of standardized budget review and planning.


Author(s):  
Jian Guan ◽  
Fei Huang ◽  
Zhihao Zhao ◽  
Xiaoyan Zhu ◽  
Minlie Huang

Story generation, namely generating a reasonable story from a leading context, is an important but challenging task. Despite their success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logical conflicts, and a lack of long-range coherence in generated stories. We conjecture that this stems from the difficulty of associating relevant commonsense knowledge, understanding causal relationships, and planning entities and events in proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation. We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between sentences in a reasonable story, we employ multi-task learning, which augments the generation objective with a discriminative objective to distinguish true from fake stories during fine-tuning. Automatic and manual evaluations show that our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
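The multi-task fine-tuning described above can be illustrated with a minimal sketch: a language-modeling loss over next-token predictions combined, via a weighted sum, with a discriminative loss for classifying a story as true or fake. This is not the authors' implementation; the function names and the weighting parameter `alpha` are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target):
    # Negative log-probability of the target class under softmax(logits).
    probs = softmax(logits)
    return -np.log(probs[target] + 1e-12)

def multitask_loss(lm_logits, lm_targets, cls_logits, cls_target, alpha=0.5):
    # Language-modeling objective: average next-token cross-entropy.
    lm_loss = np.mean([cross_entropy(l, t) for l, t in zip(lm_logits, lm_targets)])
    # Discriminative objective: classify the story as true (1) or fake (0).
    cls_loss = cross_entropy(cls_logits, cls_target)
    # Fine-tuning minimizes a weighted sum of the two objectives.
    return lm_loss + alpha * cls_loss
```

Fake stories for the discriminative objective are typically constructed by perturbing true stories (e.g., shuffling or substituting sentences), so the classifier pressures the model toward causally and temporally consistent text.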


2007 ◽  
Vol 33 (3) ◽  
pp. 355-396 ◽  
Author(s):  
Julia Hockenmaier ◽  
Mark Steedman

This article presents an algorithm for translating the Penn Treebank into a corpus of Combinatory Categorial Grammar (CCG) derivations augmented with local and long-range word-word dependencies. The resulting corpus, CCGbank, includes 99.4% of the sentences in the Penn Treebank. It is available from the Linguistic Data Consortium, and has been used to train wide-coverage statistical parsers that obtain state-of-the-art rates of dependency recovery. In order to obtain linguistically adequate CCG analyses, and to eliminate noise and inconsistencies in the original annotation, an extensive analysis of the constructions and annotations in the Penn Treebank was called for, and a substantial number of changes to the Treebank were necessary. We discuss the implications of our findings for the extraction of other linguistically expressive grammars from the Treebank, and for the design of future treebanks.
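The abstract assumes familiarity with how CCG derivations are built. The article's Treebank-conversion algorithm is far more involved, but the core combinatory machinery can be sketched as toy string-based category application: a function category X/Y consumes a Y to its right (forward application), and X\Y consumes a Y to its left (backward application). The helper names here are hypothetical.

```python
def strip_outer(cat):
    """Remove outer parentheses only if they wrap the whole category."""
    if not (cat.startswith("(") and cat.endswith(")")):
        return cat
    depth = 0
    for i, ch in enumerate(cat):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if depth == 0 and i < len(cat) - 1:
            return cat  # parentheses close early; not a full wrapper
    return cat[1:-1]

def forward_apply(fn_cat, arg_cat):
    # X/Y applied to a following Y yields X.
    if fn_cat.endswith("/" + arg_cat):
        return strip_outer(fn_cat[: -(len(arg_cat) + 1)])
    return None

def backward_apply(arg_cat, fn_cat):
    # Y followed by X\Y yields X.
    if fn_cat.endswith("\\" + arg_cat):
        return strip_outer(fn_cat[: -(len(arg_cat) + 1)])
    return None
```

For example, a transitive verb of category (S\NP)/NP combines with an object NP by forward application to give S\NP, which then combines with a subject NP by backward application to give S.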


2020 ◽  
Vol 34 (07) ◽  
pp. 11531-11538
Author(s):  
Zhihui Lin ◽  
Maomao Li ◽  
Zhuobin Zheng ◽  
Yangyang Cheng ◽  
Chun Yuan

Spatiotemporal prediction is challenging due to complex dynamic motion and appearance changes. Existing work concentrates on embedding additional cells into the standard ConvLSTM to memorize spatial appearances during prediction. These models rely on convolutional layers to capture spatial dependence, which are local and inefficient, whereas long-range spatial dependencies are significant for spatial applications. To extract spatial features with both global and local dependencies, we introduce the self-attention mechanism into ConvLSTM. Specifically, a novel self-attention memory (SAM) is proposed to memorize features with long-range dependencies in both the spatial and temporal domains. Based on self-attention, SAM produces features by aggregating features across all positions of both the input itself and the memory, weighted by pair-wise similarity scores. Moreover, the additional memory is updated by a gating mechanism on the aggregated features and an established highway with the memory of the previous time step. Therefore, through SAM, we can extract features with long-range spatiotemporal dependencies. Furthermore, we embed SAM into a standard ConvLSTM to construct a self-attention ConvLSTM (SA-ConvLSTM) for spatiotemporal prediction. In experiments, we apply SA-ConvLSTM to frame prediction on the MovingMNIST and KTH datasets and to traffic flow prediction on the TaxiBJ dataset. Our SA-ConvLSTM achieves state-of-the-art results on all three datasets with fewer parameters and higher time efficiency than previous state-of-the-art methods.
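The aggregation step inside SAM can be sketched minimally: features at every spatial position attend to every other position of both the input and the memory, weighted by pair-wise similarity scores. This is a simplified sketch, not the paper's implementation; it omits the gating mechanism and highway update, and the function names and projection matrices `Wq`, `Wk`, `Wv` are hypothetical. Spatial maps are assumed flattened to shape (N, d), where N = H*W positions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_aggregate(queries, keys, values):
    """Aggregate value features across all positions, weighted by
    scaled pair-wise query-key similarity (standard dot-product attention)."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    weights = softmax(scores)        # each row sums to 1 over positions
    return weights @ values          # (N, d): global, not local, aggregation

def sam_aggregation(input_feats, memory_feats, Wq, Wk, Wv):
    # Queries come from the current input; keys/values come from both the
    # input itself and the memory, so every position sees long-range
    # context in space (other positions) and time (the memory).
    q = input_feats @ Wq
    from_input = attention_aggregate(q, input_feats @ Wk, input_feats @ Wv)
    from_memory = attention_aggregate(q, memory_feats @ Wk, memory_feats @ Wv)
    return from_input + from_memory  # fused before the gated memory update
```

In contrast, a convolution would mix each position only with its k×k neighborhood; here the attention weights span all N positions, which is what gives SAM its global receptive field at every step.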

