Satellite IoT Edge Intelligent Computing: A Research on Architecture

Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1247 ◽  
Author(s):  
Junyong Wei ◽  
Jiarong Han ◽  
Suzhi Cao

As the number of satellites continues to increase, satellites are becoming an important part of the IoT and of 5G/6G communications. How to handle the data of the satellite Internet of Things is a problem worth considering and paying attention to. Because of the limits of current on-board processing capability and inter-satellite communication rates, acquiring data from satellites incurs high delay, and the data utilization rate is low. In order to use the data generated by the satellite IoT more effectively, we propose a satellite IoT edge intelligent computing architecture. In the article, we analyze current methods of satellite data processing in light of the development trend of future satellites, and draw on the characteristics of edge computing and machine learning to describe the satellite IoT edge intelligent computing architecture. Finally, we verify that the architecture can speed up the processing of satellite data. By demonstrating the performance of different neural network models in the satellite edge intelligent computing architecture, we find that lightweight neural networks can promote the development of the satellite IoT edge intelligent computing architecture.
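The abstract attributes the speed-up to "lightweight" neural networks without naming a technique; depthwise-separable convolutions (as in MobileNet) are one common lightweighting approach. A minimal sketch of the parameter saving, assuming that technique rather than anything stated in the paper:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 256 input channels, 256 output channels, 3 x 3 kernel.
standard = conv_params(256, 256, 3)                    # 589824 parameters
lightweight = depthwise_separable_params(256, 256, 3)  # 67840 parameters
print(round(standard / lightweight, 1))                # roughly 8.7x fewer
```

For on-board processing, a saving of this order translates directly into less memory and fewer multiply-accumulate operations per inference.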

2014 ◽  
Vol 602-605 ◽  
pp. 2619-2622
Author(s):  
Guang Hui Cai ◽  
Shu Wen Zheng ◽  
Ping Li ◽  
Chuan Liang

This paper introduces the basic principle of RS codes in an OFDM system and a design method built from 16 shift registers, finite-field multipliers and adders, and a timing circuit, validated with the EDA tool Quartus II. Finally, the simulation results are given and demonstrate that good channel encoding and decoding can be achieved by using an FPGA in an OFDM system, which saves cost, shortens the design cycle, speeds up the time-to-market of products, occupies fewer hardware resources, and complies with the development trend of modern communication.
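The finite-field multipliers the paper realizes in hardware operate in a small Galois field. For RS codes over GF(2^4), multiplication modulo a primitive polynomial can be sketched in software as follows; the polynomial x^4 + x + 1 is a common choice, assumed here rather than taken from the paper:

```python
def gf16_mul(a, b, poly=0b10011):
    """Carry-less multiplication in GF(2^4), reduced by x^4 + x + 1.

    Mirrors what a shift-register multiplier does in hardware:
    shift `a`, conditionally XOR into the result, reduce on overflow.
    """
    result = 0
    while b:
        if b & 1:          # current bit of b selects this shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:    # a degree-4 term appeared: reduce it modulo poly
            a ^= poly
    return result

# x * (x^3 + 1) = x^4 + x = (x + 1) + x = 1, so 2 and 9 are inverses:
print(gf16_mul(2, 9))  # 1
```

Each loop iteration corresponds to one clock of the serial multiplier; an FPGA implementation typically unrolls this into pure combinational XOR logic.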


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1190
Author(s):  
Yong Luo ◽  
Liancheng Yin ◽  
Wenchao Bai ◽  
Keming Mao

As a special case of machine learning, incremental learning can acquire useful knowledge from incoming data continuously without needing to access the original data. It is expected to have the ability of memorization, and it is regarded as one of the ultimate goals of artificial intelligence technology. However, incremental learning remains a long-term challenge. Modern deep neural network models achieve outstanding performance on stationary data distributions with batch training. This restriction leads to catastrophic forgetting in incremental learning scenarios, since the distribution of incoming data is unknown and may differ greatly from that of the old data. Therefore, a model must be both plastic, to acquire new knowledge, and stable, to consolidate existing knowledge. This review aims to provide a systematic survey of the state of the art in incremental learning methods. Published reports were selected from the Web of Science, IEEE Xplore, and DBLP databases up to May 2020. Each paper is reviewed according to its type: architectural strategy, regularization strategy, or rehearsal and pseudo-rehearsal strategy. We compare and discuss the different methods. Moreover, the development trend and research focus are given. It is concluded that incremental learning is still a hot research area and will remain one for a long period. More attention should be paid to the exploration of both biological systems and computational models.
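As one concrete illustration of the rehearsal strategy the review categorizes, a fixed-size memory of past examples can be maintained with reservoir sampling, so every example seen so far has an equal chance of being retained for replay. This is a generic sketch, not a method from any specific paper surveyed:

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory of past examples for rehearsal-based
    incremental learning, filled by reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Keep the new example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        """Old examples to mix into the next training batch."""
        return self.rng.sample(self.memory, min(k, len(self.memory)))

buf = RehearsalBuffer(capacity=100)
for x in range(1000):   # a stream of 1000 incoming examples
    buf.add(x)
print(len(buf.memory))  # 100
```

During training on a new task, each batch would be augmented with `buf.sample(k)` so gradients also reflect the old distribution, which is what counteracts catastrophic forgetting in this family of methods.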


Author(s):  
Vamsi Krishna Mekala

Diabetic retinopathy is among the most common causes of vision loss in today's world. Visual impairment affects about one in three diabetics, according to epidemiological research. Diagnostic imaging is an important aspect of medical photography in the contemporary world. Deep learning improves the ability to identify illness in radiography. The goal is to use machine learning to diagnose vision loss. Deep learning in diagnostic devices might improve and speed up the diagnosis of diabetes-related vision loss. This research looks at neural network models, algorithms, and simulations in order to diagnose diabetic retinopathy rapidly and help the medical system. The classifier is constructed using a CNN.


2013 ◽  
Vol 694-697 ◽  
pp. 2471-2475
Author(s):  
Ming Yu Jia ◽  
Qiu Hong Gao

To speed up the integration of industrialization and informatization in China and to promote the transformation and upgrading of enterprises, this article introduces the project of manufacturing informatization based on the Internet of Things (IoT), analyzes the development of IoT technology, expounds the application of IoT in informatization development, and points out the development trend of manufacturing informatization in China based on IoT.


Author(s):  
Emanuele La Malfa ◽  
Rhiannon Michelmore ◽  
Agnieszka M. Zbrzezny ◽  
Nicola Paoletti ◽  
Marta Kwiatkowska

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of the explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence of hard instances. We show how our method can be configured with different perturbation sets in the embedding space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from the SST, Twitter, and IMDB datasets, demonstrating the effectiveness of the derived explanations.
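The paper's implicit-hitting-set algorithm avoids materializing all conflict sets, but the underlying minimum-hitting-set problem it solves can be illustrated with a brute-force sketch over explicit sets; the integer elements below are hypothetical token indices, not anything from the paper:

```python
from itertools import combinations

def minimum_hitting_set(sets):
    """Smallest set of elements that intersects every set in `sets`.

    Brute force over subsets by increasing size, so the first subset
    found is guaranteed minimum. The implicit variant instead grows the
    collection of sets lazily from an oracle, but hits them the same way.
    """
    universe = sorted(set().union(*sets))
    for size in range(len(universe) + 1):
        for candidate in combinations(universe, size):
            c = set(candidate)
            if all(c & s for s in sets):
                return c
    return None

# Each set is a "conflict" any valid explanation must hit:
conflicts = [{1, 2}, {2, 3}, {3, 4}]
print(minimum_hitting_set(conflicts))  # a 2-element hitting set
```

In the explanation setting, a hitting set of all conflicts corresponds to a word subset whose removal can no longer be compensated by any bounded perturbation, which is what makes the explanation robust.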


2015 ◽  
Vol 743 ◽  
pp. 399-402
Author(s):  
Shu Wen Zheng ◽  
Guang Hui Cai ◽  
H.W. Wang ◽  
G.Y. Zhang

This paper introduces the basic principle of the Viterbi code in an OFDM system and a new implementation method based on an FPGA, with the timing circuit validated using the EDA tool Quartus II. In the new method, the Viterbi decoding module is improved, which allows the design of the whole decoding structure to be improved and solves the compatibility problem among the modules. Finally, the simulation results are given and demonstrate that a good Viterbi code can be achieved by using an FPGA in an OFDM system, which saves cost, shortens the design cycle, speeds up the time-to-market of products, occupies fewer hardware resources, and complies with the development trend of modern communication.
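The abstract does not give the code's parameters; a minimal software sketch of Viterbi decoding, assuming the textbook rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), shows the add-compare-select recursion that the FPGA pipeline implements in parallel:

```python
def conv_encode(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder, generators (7, 5)."""
    p1 = p2 = 0                       # the two previous input bits
    out = []
    for u in bits + [0, 0]:           # two zero tail bits flush the register
        out += [u ^ p1 ^ p2, u ^ p2]
        p1, p2 = u, p1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding for the encoder above."""
    INF = float("inf")
    metric = [0, INF, INF, INF]       # path metric per state (p1, p2) = (s>>1, s&1)
    paths = [[], [], [], []]
    for t in range(len(received) // 2):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):            # add-compare-select over all transitions
            if metric[s] == INF:
                continue
            p1, p2 = (s >> 1) & 1, s & 1
            for u in (0, 1):
                o1, o2 = u ^ p1 ^ p2, u ^ p2       # expected channel bits
                cost = metric[s] + (o1 != r1) + (o2 != r2)
                ns = (u << 1) | p1                 # next state
                if cost < new_metric[ns]:
                    new_metric[ns] = cost
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best][:n_bits]       # drop the two tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = conv_encode(msg)
noisy[3] ^= 1                         # flip one channel bit
print(viterbi_decode(noisy, len(msg)) == msg)  # True: the error is corrected
```

This code has free distance 5, so single-bit channel errors like the one above are always corrected; a hardware decoder replaces the per-path lists with a fixed-depth traceback memory.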


Author(s):  
Yan Chen

In the development of modern enterprises, the management mode is the main factor that determines the height of their development, so it is highly valued. The traditional management mode has many disadvantages and cannot comply with the development trend of the current market economy, which leads to slow improvement of the enterprise management level and is not conducive to the operation and development of enterprises. The concept of value-oriented enterprise management has become a new mode of enterprise management, helping to speed up the reform process of enterprises and enhance their development vitality. Therefore, we should clarify the key points of the construction of the model and gradually improve the level of management. This paper analyzes the concept of value-oriented enterprise management, presents the application status and classification of the value-oriented enterprise management model, and explores strategies for constructing it.


2020 ◽  
Vol 10 (18) ◽  
pp. 6151
Author(s):  
Sheng-Chieh Hung ◽  
Hui-Ching Wu ◽  
Ming-Hseng Tseng

Classification of remote sensing imagery is needed in disaster investigation, traffic control, and land-use resource management. How to quickly and accurately classify such imagery has become a popular research topic. However, training classifiers with large, deep neural network models in the hope of obtaining good classification results is often very time-consuming. In this study, a new CNN (convolutional neural network) architecture, i.e., RSSCNet (remote sensing scene classification network), with high generalization capability was designed. Moreover, a two-stage cyclical learning rate policy and a no-freezing transfer learning method were developed to speed up model training and enhance accuracy. In addition, the manifold learning t-SNE (t-distributed stochastic neighbor embedding) algorithm was used to verify the effectiveness of the proposed model, and the LIME (local interpretable model-agnostic explanations) algorithm was applied to improve the results in cases where the model made wrong predictions. Comparing the results on three publicly available datasets with those obtained in previous studies, the experimental results show that the model and method proposed in this paper can achieve better scene classification more quickly and more efficiently.
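The abstract does not spell out its two-stage cyclical learning rate policy. One plausible reading, assuming Smith-style triangular cycles with a lower and narrower range in the second stage, can be sketched as follows; the ranges and stage lengths are illustrative assumptions, not the paper's values:

```python
def triangular_lr(step, base_lr, max_lr, cycle_len):
    """Triangular cyclical learning rate: ramps base -> max -> base each cycle."""
    pos = (step % cycle_len) / cycle_len   # position within the cycle, in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)       # 0 -> 1 -> 0 over one cycle
    return base_lr + (max_lr - base_lr) * tri

def two_stage_lr(step, stage1_steps, cycle_len):
    """Hypothetical two-stage policy: wide cycles first for fast exploration,
    then a lower, narrower range for fine-tuning."""
    if step < stage1_steps:
        return triangular_lr(step, 1e-4, 1e-2, cycle_len)
    return triangular_lr(step - stage1_steps, 1e-5, 1e-3, cycle_len)

print(two_stage_lr(0, 1000, 200))    # starts stage 1 at the base rate, 1e-4
```

Combined with no-freezing transfer learning, every pretrained layer would be updated under this schedule from the first step, rather than unfreezing layers gradually.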

