Proceedings of the 2000 American Control Conference. ACC (IEEE Cat. No.00CH36334), 2000
Approximate solution of a general N-stage stochastic optimal control problem is considered. It is known that uniformly discretizing the state components when applying dynamic programming may cause the procedure to incur the “curse of dimensionality”. Approximating networks, i.e., linear combinations of parametrized basis functions endowed with density properties in some normed linear spaces, are then defined and used in two
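The truncated abstract contrasts uniform state discretization, whose cost grows exponentially with the state dimension, against approximating networks built as linear combinations of parametrized basis functions. A minimal NumPy sketch of both ideas follows; the Gaussian radial basis functions, sizes, and parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Uniform discretization: with a fixed number of points per state
# component, the grid size grows exponentially in the dimension
# (the "curse of dimensionality" the abstract refers to).
points_per_axis = 10
grid_sizes = {dim: points_per_axis ** dim for dim in (1, 2, 4, 8)}

# An approximating network: a linear combination of parametrized basis
# functions; here Gaussian RBFs with free centers, widths, and weights
# (all hypothetical choices for illustration).
def rbf_network(x, centers, widths, coeffs):
    # x: (d,) state; centers: (n, d); widths: (n,); coeffs: (n,)
    dist2 = np.sum((centers - x) ** 2, axis=1)
    return coeffs @ np.exp(-dist2 / (2.0 * widths ** 2))

rng = np.random.default_rng(0)
n, d = 20, 4
value = rbf_network(np.zeros(d),
                    rng.normal(size=(n, d)),   # centers (free parameters)
                    np.ones(n),                # widths (free parameters)
                    rng.normal(size=n))        # linear coefficients
```

The point of the sketch: the network's complexity is governed by the number of basis functions n, not by a grid of size 10^d.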
Functional optimization problems can be solved analytically only if special assumptions are verified; otherwise, approximations are needed. The approximate method that we propose is based on two steps. First, the decision functions are constrained to take on the structure of linear combinations of basis functions containing free parameters to be optimized (hence, this step can be considered as
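The two-step scheme described in this abstract can be sketched concretely: first fix the structure (a linear combination of basis functions with free parameters), then optimize those parameters numerically on a sampled version of the functional. The least-squares cost, basis choice, and target function below are all hypothetical stand-ins for illustration.

```python
import numpy as np

# Step 1: constrain the decision function to the structure
#   u(x) = sum_i c_i * phi_i(x)
# with fixed Gaussian basis functions phi_i and free coefficients c_i.
def basis(x, centers, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Step 2: optimize the free parameters on a sampled surrogate of the
# functional; a least-squares fit to a hypothetical optimal decision
# rule stands in for the true cost here.
centers = np.linspace(-1.0, 1.0, 9)
x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)               # hypothetical optimal decision rule
Phi = basis(x, centers)
coeffs, *_ = np.linalg.lstsq(Phi, target, rcond=None)
u = Phi @ coeffs
err = np.max(np.abs(u - target))
```

The finite-dimensional problem over `coeffs` replaces the original infinite-dimensional one over decision functions, which is what makes the method computationally tractable.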
Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No.00CH37187), 2000
Estimation problems are addressed for continuous-time, nonlinear dynamic systems in a general Lp framework. In this setting, the connection between the observation and the filtering problems is investigated. Under some regularity assumptions on the nonlinearities and suitable bounds on the Lp norm of the noises, it is proved that the same hypotheses sufficient to design an exponential observer for a
Observer design is addressed for a class of continuous-time, nonlinear dynamic systems with Lipschitz nonlinearities. A full-order state estimator is considered that depends on an innovation function made up of two terms: a linear gain and a feedforward neural network that provides a nonlinear contribution. The gain and the weights of the neural network are chosen in such a way as to ensure the convergence of the estimation error. This goal is achieved by constraining the derivative of a Lyapunov function to be negative definite on a sampling grid of points. Under assumptions on the smoothness of the Lyapunov function and on the distribution of the sampling points, the negative definiteness of the derivative of the Lyapunov function is obtained by minimizing a cost function that penalizes the constraints that are not satisfied. Suitable sampling techniques reduce the computational burden required by the optimization of the network's weights. Simulation results are presented to...
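The penalty idea in this abstract (minimize a cost that penalizes sample points where the Lyapunov derivative fails to be negative) can be illustrated on a deliberately simple scalar case. The error dynamics, Lipschitz bound, grid, and candidate-gain search below are all simplifying assumptions for illustration; the paper's setting uses a neural innovation term rather than a single scalar gain.

```python
import numpy as np

# Hypothetical scalar error dynamics  e' = (a - l) e + dphi,  |dphi| <= gamma*|e|
# (Lipschitz nonlinearity). With V(e) = e^2, a worst-case bound on the
# Lyapunov derivative at a sample point e_k is
#   Vdot(e_k) <= 2 * (a - l + gamma) * e_k^2.
a, gamma = 1.0, 0.5
grid = np.linspace(-2.0, 2.0, 41)
grid = grid[grid != 0.0]      # negative definiteness is checked off the origin

def penalty(l, margin=1e-3):
    vdot = 2.0 * (a - l + gamma) * grid ** 2
    # Penalize every sample point where Vdot fails to be negative.
    return np.sum(np.maximum(0.0, vdot + margin))

# Choose the gain by minimizing the penalty over a candidate set,
# standing in for the weight optimization described in the abstract.
candidates = np.linspace(0.0, 5.0, 501)
best_gain = candidates[np.argmin([penalty(l) for l in candidates])]
```

As expected for this toy case, the selected gain exceeds a + gamma, the threshold that makes the worst-case Lyapunov derivative negative at every grid point.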
... Examples of Problem PM are the T-stage stochastic single-person optimal decision problem (where M ... For the particular case of various functional optimization problems associated with learning from data ... Accuracy of suboptimal solutions to kernel principal component analysis. ...
Papers by Marcello Sanguineti