Stochastic Recruitment: A Limited-Feedback Control Policy for Large Ensemble Systems

Author(s):  
Lael Odhner ◽  
Harry Asada
Author(s):  
Jianjiang Cui ◽  
Cheng’en Liu ◽  
Chen Zhang

A new control method for Shape Memory Alloy (SMA) actuators based on stochastic recruitment is proposed. From the perspective of bionic control, the physiological basis of stochastic recruitment limited feedback control of artificial muscle is explained. Limited feedback control systems of two-state cells and of multi-state cells are established, and the limited feedback control law is derived for each kind of system. The two limited feedback control algorithms are simulated and compared in the Simulink environment, which verifies the feasibility of the algorithms. To address the key problem of the scaling coefficient in the system, longitudinal and transverse comparative experiments are carried out to analyze the influence of the scaling coefficient on the system.
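As a rough illustration of the broadcast, limited-feedback recruitment idea, the sketch below drives a hypothetical ensemble of two-state cells toward a target ON count using only the aggregate count as feedback. The cell count, target, and the scaling coefficient `alpha` are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of two-state cells under broadcast stochastic
# recruitment: the controller measures only the aggregate ON count and
# broadcasts a single switching probability to every cell.
N = 1000                       # number of cells (illustrative)
state = np.zeros(N, dtype=bool)  # all cells start OFF
target = 600                   # desired number of ON cells (illustrative)
alpha = 0.5                    # scaling coefficient on the correction

for step in range(50):
    n_on = int(state.sum())
    err = target - n_on
    if err > 0:
        # Recruit: broadcast an OFF->ON probability; expected
        # number of newly recruited cells is alpha * err.
        p = alpha * err / max(N - n_on, 1)
        flips = rng.random(N) < p
        state |= flips          # only OFF cells can change here
    elif err < 0:
        # Release: broadcast an ON->OFF probability.
        p = alpha * (-err) / max(n_on, 1)
        flips = rng.random(N) < p
        state &= ~flips         # only ON cells can change here

print(int(state.sum()))
```

Because each cell flips independently with the broadcast probability, the ON count follows a binomial distribution around the commanded correction, and the scaling coefficient trades convergence speed against steady-state fluctuation.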


2012 ◽  
Vol 433-440 ◽  
pp. 7089-7096
Author(s):  
Qing Quan Liu

This paper addresses the dynamic output feedback stabilization problem for linear time-invariant systems in which a preview of the process disturbance is available to the controller via communication networks. A lower bound on the data rate of the communication channel is presented, above which there exists a feedback control policy that stabilizes the unstable plant subject to unbounded disturbances. Furthermore, the problem of bandwidth allocation in the communication channel is analyzed based on the system dynamics. Simulation results show the validity of the proposed scheme.
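For orientation, bounds of this kind refine the classical data-rate theorem for linear plants: with dynamics $x_{k+1} = A x_k + B u_k + w_k$, stabilization over a channel of rate $R$ bits per sample is possible only if $R$ exceeds the sum of the logarithms of the unstable eigenvalue magnitudes of $A$. The paper's specific bound, which additionally accounts for unbounded disturbances and disturbance preview, is not reproduced here:

```latex
% Classical necessary condition (data-rate theorem) for stabilizing
% x_{k+1} = A x_k + B u_k + w_k over a channel of rate R bits/sample:
R > \sum_{i \,:\, |\lambda_i(A)| \ge 1} \log_2 \left| \lambda_i(A) \right|
```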


Author(s):  
Javad Sovizi ◽  
Suren Kumar ◽  
Venkat Krovi

We present a computationally efficient approach for the intra-operative update of the feedback control policy for a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the method of approximating Markov chains to approximate the continuous (and controlled) process with a discrete and locally consistent counterpart. This provides the grounds to examine a linear programming (LP) approach to solving the imposed DP problem, which significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with the results obtained through a more computationally demanding algorithm, iterative policy space approximation. Finally, we investigate the reliability of the LP-based policy under motion and parametric uncertainties, as well as the effect of the insertion point/angle on the probability of success.
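To make the LP formulation of the DP equations concrete, the following sketch solves a toy two-state, two-action discounted Markov decision process as a linear program with `scipy.optimize.linprog`. The transition probabilities, rewards, and discount factor are made-up illustrative numbers, not the needle-steering chain from the paper:

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# P[a, s, s']: transition probabilities (illustrative values)
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])
# r[a, s]: immediate rewards (illustrative values)
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
n_s = P.shape[1]

# LP form of the Bellman optimality equations: minimize sum_s V(s)
# subject to V(s) >= r(a,s) + gamma * sum_{s'} P(a,s,s') V(s') for all a, s.
c = np.ones(n_s)
A_ub, b_ub = [], []
for a in range(P.shape[0]):
    for s in range(n_s):
        A_ub.append(gamma * P[a, s] - np.eye(n_s)[s])  # gamma*P V - V <= -r
        b_ub.append(-r[a, s])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_s)
V = res.x

# Recover the greedy (optimal) policy from the optimal value function.
Q = r + gamma * np.einsum('asj,j->as', P, V)
policy = Q.argmax(axis=0)
print(V, policy)
```

At the LP optimum the value function satisfies the Bellman equation with equality in at least one action per state, so the greedy policy read off from `Q` is optimal; success/failure probabilities would enter as additional LP variables in the paper's formulation.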


2015 ◽  
Vol 114 (4) ◽  
pp. 2187-2193 ◽  
Author(s):  
Shoko Kasuga ◽  
Sebastian Telgen ◽  
Junichi Ushiba ◽  
Daichi Nozaki ◽  
Jörn Diedrichsen

When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral targets and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to the lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control changed adaptively only in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers.


2020 ◽  
Author(s):  
Guanlin Li ◽  
Shashwat Shivam ◽  
Michael E. Hochberg ◽  
Yorai Wardi ◽  
Joshua S Weitz

Lockdowns and stay-at-home orders have partially mitigated the spread of Covid-19. However, the indiscriminate nature of mitigation - applying to all individuals irrespective of disease status - has come with substantial socioeconomic costs. Here, we explore how to leverage the increasing reliability and scale of both molecular and serological tests to balance transmission risks with the economic costs involved in responding to Covid-19 epidemics. First, we introduce an optimal control approach that identifies personalized interaction rates according to an individual's test status, such that infected individuals isolate, recovered individuals can elevate their interactions, and the activity of susceptible individuals varies over time. Critically, the extent to which susceptible individuals can return to work depends strongly on isolation efficiency. As we show, optimal control policies can yield mitigation policies with infection rates similar to a total shutdown but at lower socioeconomic cost. However, optimal control policies can be fragile given mis-specification of parameters or mis-estimation of the current disease state. Hence, we leverage insights from the optimal control solutions and propose a feedback control approach based on monitoring of the epidemic state. We utilize genetic algorithms to identify a 'switching' policy such that susceptible individuals (both PCR and serological test negative) return to work after lockdowns insofar as the recovered fraction is much higher than the circulating infected prevalence. This feedback control policy exhibits performance similar to optimal control, but with greater robustness to uncertainty. Overall, our analysis shows that test-driven improvements in the isolation efficiency of infectious individuals can inform disease-dependent interaction policies that mitigate transmission while enhancing the return of individuals to pre-pandemic economic activity.
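A minimal sketch of the 'switching' feedback idea on a toy SIR model: lockdown is released only when the recovered fraction exceeds a multiple of the infected prevalence. All rates, thresholds, and contact scalings below are illustrative assumptions, not parameters from the paper:

```python
# Toy SIR model with a test-driven 'switching' release rule
# (all numbers are illustrative, not fitted to the paper's model).
beta, gamma_rec = 0.3, 0.1   # transmission and recovery rates
kappa = 10.0                 # release only when R > kappa * I
c_lock, c_open = 0.3, 1.0    # susceptible contact scaling: lockdown vs. released
dt = 0.1                     # Euler time step

S, I, R = 0.99, 0.01, 0.0    # initial fractions (sum to 1)
for _ in range(3000):
    # Feedback: the policy switches on the monitored epidemic state.
    c = c_open if R > kappa * I else c_lock
    new_inf = c * beta * S * I
    rec = gamma_rec * I
    S, I, R = S - dt * new_inf, I + dt * (new_inf - rec), R + dt * rec

print(S, I, R)
```

The switching rule keeps effective transmission below the recovery rate during lockdown and tolerates resurgence only once enough of the population has recovered, which is the robustness property the abstract attributes to the feedback policy.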


Author(s):  
Tomohiko Takei ◽  
Stephen G. Lomber ◽  
Douglas J. Cook ◽  
Stephen H. Scott

Goal-directed motor corrections are surprisingly fast and complex, but little is known about how they are generated by the central nervous system. Here we show that temporary cooling of dorsal premotor cortex (PMd) or parietal area 5 (A5) in behaving monkeys caused impairments in corrective responses to mechanical perturbations of the forelimb. Deactivation of PMd impaired both spatial accuracy and response speed, whereas deactivation of A5 impaired spatial accuracy, but not response speed. Simulations based on optimal feedback control demonstrated that ‘deactivation’ of the control policy (reduction of feedback gain) impaired both spatial accuracy and response speed, whereas ‘deactivation’ of state estimation (reduction of Kalman gain) impaired spatial accuracy but not response speed, paralleling the impairments observed from deactivation of PMd and A5, respectively. Furthermore, combined deactivation of both cortical regions led to the additive combination of the impairments of the individual deactivations, whereas reducing the amount of cooling (i.e., milder cooling) of PMd led to impairments in response speed, but not spatial accuracy; both effects were also predicted by the model simulations. These results provide causal support that higher-order motor and somatosensory regions beyond primary somatosensory and primary motor cortex are involved in generating goal-directed motor responses. In addition, the computational models suggest that the distinct patterns of impairments associated with these cortical regions reflect their unique functional roles in goal-directed feedback control.


2018 ◽  
Vol 30 (4) ◽  
pp. 1104-1131 ◽  
Author(s):  
Kyuengbo Min ◽  
Masami Iwamoto ◽  
Shinji Kakei ◽  
Hideyuki Kimpara

Humans are able to robustly maintain desired motion and posture under dynamically changing circumstances, including novel conditions. To accomplish this, the brain needs to optimize the synergistic control between muscles against external dynamic factors. However, previous related studies have usually simplified the control of multiple muscles to two opposing muscles, the minimum set of actuators needed to simulate linear feedback control. As a result, they have been unable to analyze how muscle synergy contributes to the robustness of motion control in a biological system. To address this issue, we considered a new muscle synergy concept for optimizing the synergy between muscle units under external dynamic conditions, including novel conditions. We propose that two main muscle control policies synergistically control muscle units to maintain the desired motion against external dynamic conditions. Our assumption is based on biological evidence regarding the control of multiple muscles via the corticospinal tract. One of the policies is the group control policy (GCP), which controls muscle group units classified on the basis of functional similarities in joint control. This policy effectively resists external dynamic circumstances, such as disturbances. The individual control policy (ICP) assists the GCP in precisely controlling motion by controlling individual muscle units. To validate this hypothesis, we simulated the reinforcement of the synergistic actions of the two control policies during reinforcement learning of feedback motion control. Under this learning paradigm, the two control policies were synergistically combined to yield robust feedback control under novel transient and sustained disturbances that were not part of the learning. Further, by comparing our data to experimental data generated by human subjects under the same conditions as those of the simulation, we showed that the proposed synergy concept may be used to analyze muscle synergy-driven motion control robustness in humans.


Author(s):  
Javad Sovizi ◽  
Suren Kumar ◽  
Venkat Krovi

Bevel-tip flexible needles allow for reaching remote/inaccessible organs while avoiding obstacles (sensitive organs, bones, etc.). Motion planning and control of such systems is a challenging problem due to the uncertainty induced by needle-tissue interactions, anatomical motions (respiratory- and cardiac-induced motions), imperfect actuation, etc. In this paper, we use an analogy in which steering the needle in soft tissue subject to uncertain anatomical motions is compared to a Dubins vehicle traveling in a stochastic wind field. Obtaining the optimal feedback control policy requires the solution of a dynamic programming problem that is often computationally demanding. Efficiency is not central to many optimal control algorithms, which often need to be computed only once for a given system and noise statistics. However, intraoperative policy updates may be required for adaptive or patient-specific models. We use the method of approximating Markov chains to approximate the continuous (and controlled) process with a discrete and locally consistent counterpart. We examine the linear programming method of solving the imposed dynamic programming problem, which significantly improves computational efficiency in comparison to state-of-the-art approaches. In addition, the probabilities of success and failure are simply variables of the linear optimization problem and can be used directly in different objective definitions. A numerical example of the 2D needle steering problem is considered to investigate the effectiveness of the proposed method.

