Attack target sorting technology for beyond visual range air combat based on grey incidence decision-making method

Author(s):  
Mou Chen ◽  
Qing-yuan Zou ◽  
Chang-sheng Jiang
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 11624-11634
Author(s):  
Yingying Ma ◽  
Guoqiang Wang ◽  
Xiaoxuan Hu ◽  
He Luo ◽  
Xing Lei

Author(s):  
Haipeng Kong ◽  
Ni Li

To achieve the optimal attack outcome in air combat under the beyond-visual-range (BVR) condition, the decision-making (DM) problem of properly assigning friendly fighters to hostile fighters is the most crucial task in cooperative multiple target attack (CMTA). In this paper, a heuristic quantum genetic algorithm (HQGA) is proposed to solve the DM problem. The originality of this work lies in the following aspects: (1) the HQGA assigns hostile fighters to individual missiles rather than to fighters, which allows chromosomes to be encoded with quantum bits (Q-bits); (2) the relative successful sequence probability (RSSP) is defined, on the basis of which the priority attack vector is constructed; (3) the HQGA heuristically modifies quantum chromosomes according to the modification technique proposed in this paper; (4) last but not least, under some special conditions the HQGA relaxes a constraint imposed by other algorithms and thereby obtains a better result. At the end of the paper, two examples illustrate the advantage of the HQGA over other algorithms when dealing with the DM problem in the context of CMTA.
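The Q-bit chromosome encoding named in point (1) can be sketched as follows. This is a generic quantum-genetic-algorithm representation (equal-superposition initialization, probabilistic measurement, rotation-gate update toward the best solution), not the paper's exact HQGA operators:

```python
import math
import random

def init_chromosome(n_qbits):
    # Each Q-bit starts in equal superposition: alpha = beta = 1/sqrt(2),
    # so measuring 0 or 1 is initially equally likely.
    a = 1.0 / math.sqrt(2.0)
    return [(a, a) for _ in range(n_qbits)]

def measure(chromosome, rng=random):
    # Collapse each Q-bit: observe 1 with probability beta^2.
    return [1 if rng.random() < beta * beta else 0 for _, beta in chromosome]

def rotate(chromosome, best_bits, theta=0.05 * math.pi):
    # Rotate each Q-bit's amplitudes toward the corresponding bit of the
    # best solution found so far (the standard QGA rotation-gate update).
    updated = []
    for (alpha, beta), b in zip(chromosome, best_bits):
        sign = 1.0 if b == 1 else -1.0
        a = alpha * math.cos(sign * theta) - beta * math.sin(sign * theta)
        bt = alpha * math.sin(sign * theta) + beta * math.cos(sign * theta)
        updated.append((a, bt))
    return updated

chrom = init_chromosome(8)
bits = measure(chrom)
chrom = rotate(chrom, [1] * 8)
```

Because the update is a rotation, the amplitudes stay normalized, so each gene remains a valid probability distribution over {0, 1} across generations.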


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Luhe Wang ◽  
Jinwen Hu ◽  
Zhao Xu ◽  
Chunhui Zhao

Unmanned aerial vehicles (UAVs) have become significantly important in air combat, where intelligent swarms of UAVs will be able to tackle tasks of high complexity and dynamics. The key to empowering UAVs with such capability is autonomous maneuver decision-making. In this paper, an autonomous maneuver strategy for UAV swarms in beyond-visual-range air combat based on reinforcement learning is proposed. First, based on the process of air combat and the constraints of the swarm, the motion model of the UAV and the multi-to-one air combat model are established. Second, a two-stage maneuver strategy based on air combat principles is designed, which includes inter-vehicle collaboration and target-vehicle confrontation. Then, a swarm air combat algorithm based on the deep deterministic policy gradient (DDPG) strategy is proposed for online strategy training. Finally, the effectiveness of the proposed algorithm is validated by multi-scene simulations. The results show that the algorithm is suitable for UAV swarms of different scales.
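DDPG training, as used here for online strategy learning, rests on two standard updates: the TD target for the critic and the Polyak soft update of the target networks. A minimal sketch of both, with conventional (not paper-specified) values for the discount `gamma` and mixing rate `tau`:

```python
import numpy as np

def td_target(rewards, next_q, dones, gamma=0.99):
    # Critic regression target: y = r + gamma * Q'(s', mu'(s')),
    # with the bootstrap term zeroed at terminal transitions.
    return rewards + gamma * next_q * (1.0 - dones)

def soft_update(target_params, online_params, tau=0.005):
    # Polyak averaging: theta' <- tau * theta + (1 - tau) * theta',
    # which slowly tracks the online network for stable targets.
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_params, target_params)]
```

The actor is then updated by ascending the critic's gradient with respect to the action; that step is framework-specific and omitted here.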


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Xie Lei ◽  
Ding Dali ◽  
Wei Zhenglei ◽  
Xi Zhifei ◽  
Tang Andi

To improve the accuracy and real-time performance of autonomous decision-making by an unmanned combat aerial vehicle (UCAV), a decision-making method combining a dynamic relational weight algorithm with a moving time strategy is proposed, and trajectory prediction is incorporated into maneuver decision-making. Considering that constant weights in situation assessment fail to reflect the continuity and diversity of the air combat situation, a dynamic relational weight algorithm is proposed to establish an air combat situation system and adjust the weights according to the current situation. Based on the dominance function, this method calculates the correlation degree between each subsituation and the total situation. According to the priority principle and information entropy theory, a hierarchical fitting function is proposed, the association expectation is calculated using if-then rules, and the weights are dynamically adjusted. For trajectory prediction, an online sliding input module is introduced, and a long short-term memory (LSTM) network is used for real-time prediction. To further improve prediction accuracy, the adaptive boosting (Ada) method is used to build an outer frame, which is compared with three traditional prediction networks. The results show that the prediction accuracy of Ada-LSTM is better. In the decision-making method, the moving time optimization strategy is adopted. To balance timeliness and optimality, each control variable is divided into 9 gradients, yielding 729 control schemes in the control sequence. Pursuit simulation experiments verify that the maneuver decision method combining the dynamic relational weight algorithm and the moving time strategy achieves better accuracy and real-time performance. Both with and without prediction, adaptive countermeasure simulations are carried out against a more advanced Bayesian-inference maneuvering decision-making scheme. The results show that UCAV maneuvering decision-making combined with accurate prediction performs better.
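The 729-scheme control sequence follows from discretizing three control variables into 9 gradients each (9³ = 729). A sketch of that enumeration; the variable names and bounds below are hypothetical, since the abstract does not specify them:

```python
import itertools

import numpy as np

# Hypothetical control variables and bounds; the abstract only states that
# each variable is split into 9 gradients, giving 9 ** 3 = 729 schemes.
n_gradients = 9
bounds = {
    "nx": (-2.0, 2.0),        # tangential overload (assumed)
    "nz": (-1.0, 8.0),        # normal overload (assumed)
    "roll": (-3.1416, 3.1416) # roll angle in radians (assumed)
}

# One evenly spaced grid per control variable.
grids = [np.linspace(lo, hi, n_gradients) for lo, hi in bounds.values()]

# The Cartesian product enumerates every candidate control scheme,
# which the decision step can then score and rank.
schemes = list(itertools.product(*grids))
```

At each moving-time step, the decision maker would evaluate all 729 candidates against the current situation weights and execute the best one.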


2021 ◽  
pp. 4881-4891
Author(s):  
Yue Li ◽  
Wei Han ◽  
Weiguo Zhong ◽  
Jiazheng Ji ◽  
Wanhui Mu

Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 279 ◽  
Author(s):  
Xianbing Zhang ◽  
Guoqing Liu ◽  
Chaojie Yang ◽  
Jiang Wu

With the development of information technology, the degree of intelligence in air combat is increasing, and the demand for automated intelligent decision-making systems is growing. Based on the characteristics of beyond-visual-range air combat, this paper constructs a BVR air combat training environment, which includes aircraft modeling, air combat scene design, enemy aircraft strategy design, and reward and punishment signal design. To improve the efficiency with which the reinforcement learning algorithm explores the strategy space, this paper proposes a heuristic Q-Network method that integrates expert experience, using it as a heuristic signal to guide the search process. At the same time, heuristic exploration and random exploration are combined. For the BVR air combat maneuver decision problem, the heuristic Q-Network method is adopted to train the neural network model in this training environment. Through continuous interaction with the environment, self-learning of the air combat maneuver strategy is realized. The efficiency of the heuristic Q-Network method and the effectiveness of the learned air combat maneuver strategy are verified by simulation experiments.
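The combination of expert-guided and random exploration described above can be sketched as an action-selection rule. The mixing probabilities `eta` and `epsilon` are hypothetical knobs; the paper's exact exploration schedule is not given in the abstract:

```python
import random

def select_action(q_values, expert_action, epsilon=0.1, eta=0.3, rng=random):
    """Mix expert-guided, random, and greedy action selection.

    With probability eta, follow the expert heuristic signal; with
    probability epsilon, explore uniformly at random; otherwise act
    greedily on the learned Q-values.
    """
    r = rng.random()
    if r < eta:
        return expert_action            # heuristic exploration
    if r < eta + epsilon:
        return rng.randrange(len(q_values))  # random exploration
    # Greedy exploitation of the current Q-Network estimates.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Annealing `eta` toward zero over training would hand control from the expert prior to the learned policy, a common design in heuristically guided Q-learning.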


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 279 ◽  
Author(s):  
Tongle Zhou ◽  
Mou Chen ◽  
Yuhui Wang ◽  
Jianliang He ◽  
Chenguang Yang

To improve the effectiveness of air combat decision-making systems, target intention has been extensively studied. In general, aerial target intention comprises attack, surveillance, penetration, feint, defense, reconnaissance, cover, and electronic interference, and it is related to the state of the target in air combat. Predicting the target intention helps anticipate the target's actions, so intention prediction lays a solid foundation for air combat decision-making. In this work, an intention prediction method is developed that combines the advantages of long short-term memory (LSTM) networks and decision trees. The future state information of a target is predicted from real-time series data using LSTM networks, and decision tree technology is utilized to extract rules from uncertain and incomplete prior knowledge. Then, the target intention is obtained from the predicted data by applying the built decision tree. A simulation example shows that the proposed method is effective and feasible for state prediction and intention recognition of aerial targets under uncertain and incomplete information. Furthermore, the proposed method can provide direction and aid for subsequent attack decision-making.
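The final step, mapping predicted state to intention via extracted rules, can be illustrated with a toy hand-written rule tree. The features, thresholds, and the four intention labels used below are illustrative assumptions, not the paper's learned tree:

```python
def classify_intention(pred):
    """Toy if-then rule tree over a predicted target state.

    pred: dict with keys 'distance' (km), 'closing_speed' (m/s, positive
    when approaching), and 'radar_on' (bool). In the paper, the tree is
    built from prior knowledge rather than hand-written like this.
    """
    if pred["radar_on"]:
        # Emitting radar while closing at short range suggests an attack.
        if pred["closing_speed"] > 0 and pred["distance"] < 50:
            return "attack"
        return "surveillance"
    # Radar-silent approach is consistent with penetration.
    if pred["closing_speed"] > 0:
        return "penetration"
    return "defense"
```

In the paper's pipeline, the input to such a classifier would be the LSTM's predicted future state rather than the current observation, which is what lets the intention label arrive ahead of the target's action.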

