Control and communications have traditionally been distinct areas with little overlap. Until the 1990s it was common to decouple communication issues from consideration of state estimation and control problems. In particular, in classic control and state estimation theory, the standard assumption is that all data transmission required by the algorithm can be performed with infinite precision in value. In such an approach, the control and communication components are treated as completely independent. This considerably simplifies the analysis and design of the overall system and generally works well for engineering systems with large communication bandwidth. However, in a number of emerging applications, observation and control signals must be transmitted over a communication channel of limited capacity. This issue arises, for instance, in the transmission of control signals when a large number of mobile units must be controlled remotely by a single decision maker. Since the radio spectrum is limited, communication constraints are a real concern. In [199], the design of large-scale control systems for platoons of underwater vehicles highlights the need for control strategies that cope with reduced communications, since communication bandwidth is severely limited underwater. Other emerging applications include micro-electromechanical systems and mobile telephony.
Similarly, in complex networked sensor systems containing a very large number of low-power sensors, the amount of data collected by the sensors is too large to be transmitted in full over the available communication channel. In these problems, classic control and state estimation theory cannot be applied, since the controller or state estimator observes only the transmitted sequence of finite-valued symbols. It is therefore natural to ask how much transmission capacity is needed to achieve a given control goal or a specified state estimation accuracy. The problem becomes even more challenging when the system contains multiple sensors and actuators transmitting and receiving data over a shared communication network. In such systems, each module is effectively allocated only a small portion of the network's total communication capacity.
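To give a rough sense of the kind of answer such questions admit, recall a classical data-rate result from the literature, quoted here only for illustration; it is not derived in this passage and holds under appropriate technical assumptions. For a scalar discrete-time plant
\[
x_{k+1} = a x_k + u_k, \qquad |a| > 1,
\]
stabilization over a noiseless digital channel is, roughly speaking, possible if and only if the average data rate $R$, in bits per sample, satisfies
\[
R > \log_2 |a|,
\]
and for an $n$-dimensional linear plant the corresponding condition is
\[
R > \sum_{i:\, |\lambda_i| \ge 1} \log_2 |\lambda_i|,
\]
the sum being taken over the unstable open-loop eigenvalues. Results of this type quantify precisely how the required channel capacity grows with the instability of the plant.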
Another shortcoming of classic control and estimation theory is the assumption that the data transmission and information processing required by the control/estimation algorithm can be performed instantaneously. However, in complex real-world networked control systems, data arrival times are often delayed, irregular, time-varying, and not precisely known, and data may arrive out of order. Moreover, data transferred over a communication network may be corrupted or even lost due to noise in the communication medium, congestion of the network, or protocol malfunctions. The problem of missing data may also arise from temporary sensor failures. Examples arise in planetary rovers, arrays of microactuators, and power control in mobile communications. Other examples are offered by complex dynamic processes such as advanced aircraft, spacecraft, and manufacturing processes, where time-division multiplexed computer networks are employed to exchange information between spatially distributed plant components.
Furthermore, for many complex control systems it can be desirable to distribute the control task among several processors rather than use a single central processor. If these processors are not triggered by a common clock pulse, and their computation, sampling, and hold activities are not synchronized, we call them asynchronous controllers. Moreover, these processors need not operate at the same sampling rate, and so-called multirate sampling in control systems has been of interest since the 1950s (see, e.g., [54,80,230]). The sampling rates of the controllers are typically assumed to be precisely known and integrally proportional, and sampling is synchronized so that the sampling process is periodic, with a period equal to an integral multiple of the largest sampling period. However, in many practical situations, the sampling times are irregular and not precisely known. This occurs, for example, when a large-scale computer controller is time-shared by several plants, so that control signals are sent out to each plant at random times. It should be pointed out that task allocation in large multiprocessor computers is a very complex and, in practice, nondeterministic process. Indeed, engineers often face the problem of uncertain and irregular sampling times when they use multiprocessor computer systems and communication networks for the operation and control of complex physical processes. In all these applications, communication issues are of real concern.
Another rapidly emerging area is the cooperative control of multiagent networked systems, especially formations of autonomous unmanned vehicles; see, e.g., [9, 51, 76, 159, 160, 169]. The key challenge in this area is cooperation among a group of agents performing a shared task using interagent communication. The system is decentralized, and decisions are made by each agent using limited information about the other agents and the environment. Applications include mobile robots, unmanned aerial vehicles (UAVs), automated highway systems, sensor networks for spatially distributed sensing, and microsatellite clusters. In all these applications, the interplay between communication network properties and vehicle dynamics is crucial. This class of problems represents a difficult and exciting challenge in control engineering and is expected to become one of the most important areas of control theory in the near future.