Serdar Yüksel


Professor, Chair of Mathematics & Engineering

Office: Jeffery Hall, Rm. 415
Phone: (613) 533-2429
Email: yuksel@queensu.ca
Research: Stochastic control theory, stochastic dynamical systems, robustness and learning, information theory, multi-agent systems

Degrees & Accolades:

Ph.D. (University of Illinois at Urbana-Champaign)
M.Sc. (University of Illinois at Urbana-Champaign)
B.Sc. (Bilkent University)

Research Profile:

Our research concerns the interplay of control, information, and probability theories, and their various areas of application.

On stochastic control, we are interested in the optimization of and stochastic stability for controlled Markovian systems, decentralized stochastic control, networked control, robust stochastic control, reinforcement and empirical learning, and approximation methods in stochastic control. Stochastic networked control systems under information constraints have been a recurring theme in our research.
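As a toy illustration of control under information constraints (a hedged sketch with invented parameters, not taken from any specific paper), the snippet below stabilizes an unstable scalar linear system when the controller only sees a uniformly quantized state; the rate condition bits > log2|a| echoes the well-known data-rate theorems for stabilization over finite-rate channels.

```python
import numpy as np

# Hedged sketch: control of a scalar linear system over a finite-rate channel.
# The controller sees only a quantized version of the state; with enough bits
# (rate > log2|a| for |a| > 1), the closed loop remains stochastically stable.
# All parameters below are illustrative placeholders.

rng = np.random.default_rng(4)
a, bits, L = 2.0, 3, 20.0          # unstable pole, channel rate, quantizer range
levels = 2 ** bits

def quantize(x):
    """Uniform quantizer on [-L, L] with 2^bits levels (zero-delay encoding)."""
    x = np.clip(x, -L, L)
    step = 2 * L / levels
    return step * (np.floor(x / step) + 0.5)

x = rng.normal()
for t in range(50):
    y = quantize(x)                 # only y crosses the channel
    u = -a * y                      # controller acts on the quantized estimate
    x = a * x + u + rng.normal(scale=0.1)
print("final state:", x)
```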

On information theory, we are interested in zero-delay source and channel coding, information theory for systems with feedback, stochastic stability and ergodicity properties, and applications in networked control.

On probability theory and applications, we are interested in the theory of Markov processes, stochastic stability of dynamical systems, and ergodic theory. Recently, we have been interested in exploring connections between the ergodic theory of dynamical systems, information theory, and control.
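As a small self-contained illustration of ergodicity (an invented two-state chain, not from the profile), the following simulation checks that the long-run empirical occupation frequencies of a finite Markov chain match its invariant distribution:

```python
import numpy as np

# Illustrative sketch of ergodicity for a finite Markov chain: long-run
# empirical occupation frequencies converge to the invariant distribution.
# The chain below is a placeholder example, not from any specific work.

rng = np.random.default_rng(3)
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # row-stochastic transition matrix

# Invariant distribution: left eigenvector of T with eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.isclose(w, 1.0)][:, 0])
pi /= pi.sum()

# Simulate the chain and compare empirical frequencies with pi.
s, counts = 0, np.zeros(2)
for _ in range(100_000):
    counts[s] += 1
    s = rng.choice(2, p=T[s])
print("empirical:", counts / counts.sum(), "invariant:", pi)
```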

Research Areas:

Our research focuses on stochastic control theory, probability theory, and information theory, and their applications. Some of the problems we work on are the following:

Problem 1 – Stochastic Control and Analysis in Complex Systems: Many stochastic systems are complex: their models may be incomplete, their priors incorrect, their dynamics non-Markovian, and the available information only partial or decentralized. Examples include incompletely specified dynamics where a decision maker has full or only partial state information, and multi-agent systems with decentralized and local information, in either discrete or continuous time. Our group develops probability- and control-theoretic methods to arrive at near-optimal solutions for such complex systems and to study their robustness and approximation properties.
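For a concrete, elementary instance of optimization for controlled Markovian systems, here is a hedged sketch of value iteration on a randomly generated finite Markov decision process (all sizes and numbers are placeholders, not from any specific work):

```python
import numpy as np

# Toy illustration: value iteration for a finite Markov decision process (MDP).

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
# P[a, s, s'] = transition probability; R[s, a] = expected one-stage reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    # Bellman update: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * np.einsum("asp,p->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print("values:", V, "policy:", policy)
```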

Problem 2 – Learning Theory for Stochastic Control: Reinforcement learning theory makes it possible to arrive at near-optimal solutions for problems whose dynamics are unknown or too challenging to solve analytically. Under technical conditions, empirical learning and Bayesian learning can lead to policies that are near-optimal in the presence of data. Many fundamental problems remain open for systems with general state and action spaces and general information structures.
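As an illustration of the reinforcement-learning setting, here is a standard tabular Q-learning sketch on a simulated finite MDP; the simulator and all parameters are invented for the example:

```python
import numpy as np

# Tabular Q-learning sketch on a randomly generated finite MDP, assuming
# only a simulator step(s, a) -> (s_next, r) is available (dynamics unknown
# to the learner). Everything here is an illustrative placeholder.

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(size=(n_states, n_actions))

def step(s, a):
    s_next = rng.choice(n_states, p=P[a, s])
    return s_next, R[s, a]

Q = np.zeros((n_states, n_actions))
s = 0
for t in range(1, 200_000):
    # Epsilon-greedy exploration.
    a = rng.integers(n_actions) if rng.random() < 0.1 else Q[s].argmax()
    s_next, r = step(s, a)
    alpha = 1.0 / (1 + t // 1000)   # slowly decaying step size
    # Stochastic approximation of the Bellman optimality operator.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```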

Problem 3 – Interaction between Control and Information: There is an intrinsic relation between information and control, and between these and related areas such as game theory. A decision maker has to optimally utilize partial information to generate decisions, sometimes via non-linear filtering. The transmission of information over measurement channels may itself be part of the problem, which gives it an information-theoretic angle. For some problems the value of additional information is always positive, but for others (notably in game theory) this need not be the case. Our group works on this relation in a variety of contexts.
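To make the filtering step concrete, here is a minimal sketch of a discrete-time Bayes filter on a finite state space; the matrices are random placeholders, and this is only the elementary finite-state counterpart of the non-linear filters mentioned above:

```python
import numpy as np

# Minimal sketch of a finite-state Bayes (non-linear) filter: how a controller
# might turn partial observations into a posterior ("belief") on the state.
# Transition and observation matrices below are illustrative placeholders.

n_states, n_obs = 3, 2
rng = np.random.default_rng(2)
T = rng.dirichlet(np.ones(n_states), size=n_states)   # T[s, s'] = P(s' | s)
O = rng.dirichlet(np.ones(n_obs), size=n_states)      # O[s, y]  = P(y | s)

def filter_update(belief, y):
    """One prediction + measurement-update step of the Bayes filter."""
    predicted = belief @ T              # prediction: pi^-(s') = sum_s pi(s) T[s, s']
    unnormalized = predicted * O[:, y]  # correction: weight by likelihood P(y | s')
    return unnormalized / unnormalized.sum()

belief = np.full(n_states, 1.0 / n_states)  # uniform prior
for y in [0, 1, 1, 0]:                      # a hypothetical observation sequence
    belief = filter_update(belief, y)
print("posterior over states:", belief)
```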