Theoretical Foundations

In the area of communication and sensor networks, there is a critical need for fundamental research that sheds light on the capabilities, limitations, and designs of these complex communication and processing systems. Europe (and specifically Spain) needs to explore ideas that go beyond the state of the art in this area. While most of the ongoing work in communication networks relies on detailed engineering and exhaustive measurement, novel principles are needed that truly support self-organization and continuous adaptability.

This workpackage covers:

 Frontier research that addresses basic and fundamental topics in wireless communication and sensor networks, and in telecommunications at large.

 Multidisciplinary research that establishes a bridge between the field of communications and others not directly related.

 Multidisciplinary research that can put forth novel theoretical tools capable of properly combining different disciplines.

This workpackage is led by CEIT and UPC, and involves the following partners: UVEG, UDC, UVIGO, UC, UPF, UC3M and US. The activities contained in this workpackage are described in more detail below.

A1.1 - Network Information Theory

Network information theory, also termed multiuser information theory, tries to generalize the seminal work by C. Shannon for point-to-point links to scenarios where there are several sources communicating over a network with several senders and receivers. The framework of this theory, which is still in its infancy, allows considering more than one information source and/or communication channel with the possible use of auxiliary signals to provide channel state information (CSI).

In such a scenario, with several senders and receivers sharing a common medium, some important issues cannot be mapped onto the original point-to-point framework proposed by Shannon: correlation between users, side information, co-operation, competition, superposition of information, interference, etc. Although some of these questions have been addressed and solved for very specific multiuser setups, numerous open problems remain and no comprehensive theory of information networks yet exists. Although still primitive in its current form, the theory has already produced results that show its great potential when compared with traditional techniques for point-to-point communication. The benefits are two-fold: a) fundamental performance limits, and b) a road to optimality, that is, intuition and key properties with which to build the optimal codes and processing systems for each scenario.

Besides sensor networks, computer, satellite, and cellular phone systems are examples of large communication networks that could benefit from a further development of this theory. Even within a single computer, a myriad of components talk to each other. Compared with its point-to-point counterpart, a complete theory for networks would have far wider implications, allowing for higher data rates, lower energy consumption, greater robustness, and adaptability. The COMONSENS consortium wants to contribute to this quest.

More specifically, among the classical problems to be considered are: the multiple access channel, the broadcast channel, coding for correlated multi-terminal sources, the interference channel, the relay channel, and channels with side or state information. As mentioned previously, many of the fundamental limits of these advanced communication configurations are still to be found (for example, the capacity of a basic two-source two-sink network, also known as Shannon's interference model, is not yet known for general power values).
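
In contrast with the open interference channel, the two-user Gaussian multiple access channel is one of the few configurations whose capacity region is completely known: a pentagon whose corner points are achieved by successive decoding. The following minimal numerical sketch (the function names are ours, chosen for illustration) computes those corners:

```python
import math

def c(snr):
    """Gaussian channel capacity in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def mac_corners(p1, p2, n0):
    """Corner points of the two-user Gaussian MAC capacity pentagon.

    Each corner is achieved by successive decoding: one user is decoded
    first (treating the other as noise), the other interference-free.
    """
    # User 2 decoded first, user 1 decoded last over a clean channel.
    corner_a = (c(p1 / n0), c(p2 / (n0 + p1)))
    # Decoding order reversed.
    corner_b = (c(p1 / (n0 + p2)), c(p2 / n0))
    return corner_a, corner_b

a, b = mac_corners(1.0, 1.0, 1.0)
```

Both corners lie on the sum-rate face R1 + R2 = C((P1 + P2)/N0); time-sharing between them traces out the rest of that face.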

 Analysis of the network information-theoretic limits for multiple access, broadcast, interference, relay and multiterminal channels under limited channel state information. The topics that require study include the reformulation of the conventional paradigm of separability of source and channel coding, the assumption of (possibly imperfect) CSI at either end of the link(s), and the assessment of the impact of correlation among the sources. The optimization techniques to be considered include generalized waterfilling solutions, convex optimization, non-convex optimization, and game theory (multi-objective optimization).
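
The classical waterfilling solution mentioned above can be sketched in a few lines: power is poured over parallel Gaussian channels until a common water level is reached, found here by bisection (a toy illustration of the principle, not any specific algorithm from the literature):

```python
def waterfilling(noise, p_total, iters=100):
    """Allocate p_total over parallel channels: p_k = max(mu - n_k, 0),
    where the water level mu is located by bisection."""
    lo, hi = 0.0, max(noise) + p_total     # the level must lie in this range
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > p_total:
            hi = mu                        # level too high: power overspent
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]

p = waterfilling([1.0, 2.0, 3.0], 3.0)
```

For noise levels (1, 2, 3) and a budget of 3 power units, the water level settles at 3, giving the allocation (2, 1, 0): the cleanest channel receives the most power and the worst one is switched off.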

 Computation of network information-theoretic limits for discrete multiple access and broadcast channels. The computation of the channel capacity of discrete memoryless channels is a convex problem that can be efficiently solved using the popular Blahut-Arimoto algorithm. However, the extension of this algorithm to the computation of capacity regions in multiterminal networks is not straightforward since the problem then loses its convex nature. Efficient iterative algorithms and tools have to be drawn from non-convex optimization theory to explicitly compute inner and outer bounds on the capacity region of these multiuser channels.
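
For the point-to-point case, the Blahut-Arimoto iteration referred to above is short enough to sketch, and extending such fixed-point schemes to multiterminal capacity regions is exactly where convexity is lost. A minimal single-user version (our own toy implementation):

```python
import math

def blahut_arimoto(w, iters=200):
    """Capacity (bits/use) of a DMC with transition matrix w[x][y]."""
    nx, ny = len(w), len(w[0])
    p = [1.0 / nx] * nx                     # start from the uniform input law
    for _ in range(iters):
        # output law induced by p, then D(w(.|x) || q) for every input
        q = [sum(p[x] * w[x][y] for x in range(nx)) for y in range(ny)]
        d = [sum(w[x][y] * math.log(w[x][y] / q[y])
                 for y in range(ny) if w[x][y] > 0) for x in range(nx)]
        # multiplicative update followed by normalisation
        p = [p[x] * math.exp(d[x]) for x in range(nx)]
        s = sum(p)
        p = [v / s for v in p]
    q = [sum(p[x] * w[x][y] for x in range(nx)) for y in range(ny)]
    return sum(p[x] * w[x][y] * math.log2(w[x][y] / q[y])
               for x in range(nx) for y in range(ny) if w[x][y] > 0)

bsc = [[0.9, 0.1], [0.1, 0.9]]   # binary symmetric channel, crossover 0.1
cap = blahut_arimoto(bsc)
```

On the binary symmetric channel with crossover 0.1 the iteration recovers the known capacity 1 - H(0.1) ≈ 0.531 bits per use.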

 Evaluation of the network information-theoretic limits of cooperative communication paradigms and self-organizing networks. The conventional paradigm of separability between source and channel coding is no longer valid in this context and the availability of imperfect CSI at any of the cooperative links must be taken into account. Furthermore, the performance limit of these self-organized networks needs to be analyzed with respect to the mobility and interference rejection capabilities of the terminals. Hardly any theoretical results exist on the communication capacity of wireless ad-hoc networks of finite size and, even in those cases when the capacity is known, it is not well understood when practical constraints are brought in, such as limited memory buffer capabilities, possible node failures, hard delay constraints, or hard energy constraints.


A1.2 - Network Coding and Information Flows

Traditionally, routing (store-and-forward) has been the main approach for efficient flow of data in large communication networks. In routing, nodes store the received packets in proper queues and subsequently send them out based on intelligent routing schemes. However, in the past few years, and since the introduction of network coding, it has become clear that, if nodes send out functions of their input packets instead of just routing the received packets at each node, the achievable throughput can be substantially improved, particularly in multicast scenarios. This approach, which allows and encourages the mixing of data at intermediate nodes, has attracted great attention because it could directly benefit applications such as the Internet, cellular phone systems, satellite communications, local-area data services, and sensor networks.

Chronologically, it was first shown that, with network coding, the upper bound on the capacity (min-cut bound) for a multicast network can be achieved. Later on, it was shown that linear codes are sufficient to achieve this capacity. Then, distributed randomized network coding was introduced, in which nodes independently and randomly select linear mappings from inputs onto outputs over some field. This results in a practical distributed coding scheme in which the aggregate effect of the various random code choices in the network is inexpensively communicated to the receivers as vectors of coefficients within each signal block. This scheme, in turn, attains the multicast capacity with probability exponentially approaching 1 in the length of the codes. Subsequently, it was shown that the use of network codes would improve the network performance in terms of throughput, energy consumption and reliability. Despite this progress, there are still many open problems such as the impact of hard delay constraints or more general traffic communication matrices. Activity A1.2 focuses on the following areas in network coding:
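
The random linear scheme described above can be sketched over the simplest field, GF(2): every packet carries a header of mixing coefficients, and a receiver decodes by Gaussian elimination as soon as it has collected enough innovative packets (a self-contained toy, not any particular protocol):

```python
import random

def gf2_decode(rows, k):
    """Gaussian elimination over GF(2). Each row is (coeffs, payload),
    where bit j of the integer coeffs flags source packet j."""
    basis = {}                                   # pivot bit -> (coeffs, payload)
    for coeffs, payload in rows:
        for piv in sorted(basis, reverse=True):  # forward-eliminate, high to low
            if coeffs >> piv & 1:
                bc, bp = basis[piv]
                coeffs ^= bc
                payload ^= bp
        if coeffs:                               # innovative packet: new pivot
            basis[coeffs.bit_length() - 1] = (coeffs, payload)
    if len(basis) < k:
        return None                              # rank deficient: keep listening
    out = [0] * k
    for piv in sorted(basis):                    # back-substitute, low to high
        bc, bp = basis[piv]
        for j in range(piv):
            if bc >> j & 1:
                bp ^= out[j]
        out[piv] = bp
    return out

random.seed(1)
k = 4
source = [0b1010, 0b0111, 0b1100, 0b0001]        # four toy payloads
received, decoded = [], None
while decoded is None:
    coeffs = random.randrange(1, 1 << k)         # random nonzero mixing vector
    payload = 0
    for j in range(k):
        if coeffs >> j & 1:
            payload ^= source[j]
    received.append((coeffs, payload))
    decoded = gf2_decode(received, k)
```

Over GF(2) a randomly mixed packet fails to be innovative fairly often; in practice larger fields such as GF(2^8) make each received combination innovative with high probability, which is what drives the exponential success probability quoted above.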

 Distributed correlated sources. This situation is common in sensor networks, where nodes measure a physical process that is naturally correlated at nearby sensors. In general, if two correlated sources are to be transmitted to a single receiver, a separate treatment of distributed compression (i.e., Slepian-Wolf coding) and network coding turns out to be optimal. There, the network supplies multiple paths to stream the Slepian-Wolf-coded symbols to the destination. More involved network coding performing joint compression and packet combining is required when various correlated sources, generated at several nodes, are to be delivered to multiple destinations.

Previous work has considered variants of the separation theorem extended to networks. Several network topologies under certain communication patterns have been analyzed, and the effect of finite memory storage at the nodes has been studied. These contributions, however, have always assumed that the various correlated sources are synchronized in time, which is rarely the case in practice. We propose to investigate the design of distributed source and communication strategies capable of dealing with correlated sources that are not necessarily synchronized. The addition of temporal correlation and/or non-stationary correlation models to the picture is another area for further investigation.
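
Syndrome-based Slepian-Wolf coding of the kind invoked above admits a tiny worked example built on the (7,4) Hamming code: if the side information y differs from x in at most one bit, transmitting only the 3-bit syndrome of x (instead of all 7 bits) suffices. This is a textbook construction, coded here as a sketch:

```python
# Parity-check matrix of the (7,4) Hamming code: column c is the binary
# expansion of c + 1, so a weight-one error's syndrome names its position.
H = [[(c + 1) >> r & 1 for c in range(7)] for r in range(3)]

def syndrome(word):
    """3-bit syndrome H * word over GF(2)."""
    return tuple(sum(H[r][c] * word[c] for c in range(7)) % 2 for r in range(3))

def sw_decode(y, s_x):
    """Recover x from side information y plus the 3-bit syndrome of x,
    assuming x and y differ in at most one position."""
    diff = tuple(a ^ b for a, b in zip(syndrome(y), s_x))
    if diff == (0, 0, 0):
        return list(y)                            # y already equals x
    pos = diff[0] + 2 * diff[1] + 4 * diff[2] - 1 # location of the discrepancy
    x_hat = list(y)
    x_hat[pos] ^= 1
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]
y = list(x)
y[4] ^= 1                            # correlated observation: one bit flipped
x_hat = sw_decode(y, syndrome(x))    # only 3 bits crossed the network, not 7
```

The network's role in the separation-based architecture is then merely to stream such syndromes along multiple paths to the destination.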

 Extensions to wireless networks. There is a lot of interest in extending the network coding ideas to wireless networks, which possess three intrinsic characteristics: broadcasting in space, interference, and dynamism. Although it can be shown that, in some practical scenarios, the above characteristics actually favor network coding, the theoretical implications are not yet fully understood. (For example, it can be shown that, although network coding does not improve the capacity of lossless unicast networks, it does so for lossy unicast networks.) This is, therefore, an area that is ripe for productive research.

 Non-multicast and general routing networks. The general problem of transporting information from multiple sources to multiple destinations is still unsolved. More specifically, only an outer bound is known for the case of multisource-multisink coding and this outer bound is tighter than the general max-flow min-cut bound. In this activity, we plan to thoroughly investigate the problem of network coding in networks with general traffic communication matrices.


A1.3 - Cross-disciplinary Mathematical Tools

The design of future communication and sensor networks poses challenges that are too complex to be solved by the mathematical tools traditionally used in telecommunications alone. In order to successfully tackle these challenges, new approaches and novel formulations are going to be needed. This constitutes, precisely, the goal of this activity, which will borrow, combine, and expand tools from complementary disciplines such as probability theory, operator algebra, random graph models, machine learning, statistical and theoretical physics, percolation theory, biological systems, game theory, Monte Carlo statistical methods, complexity theory and advanced optimization theory. It is expected that this cross-disciplinary blend will lead to successful synergies and pave the way for a more rapid advancement of the understanding of many crucial problems in communications and sensor networks. Furthermore, these tools are expected to provide theoretical guidelines for the subsequent algorithmic design.

With the above rationale as a guideline, the tools and applications to be considered for this task include:

 Techniques from probability theory and operator algebra such as random matrix analysis and free probability theory, which are well suited to the study of large systems with random characteristics. The asymptotic eigenvalue distributions of some classes of random matrices, in particular, have already proved to be key quantities in the design, analysis and modeling of certain regular communication systems such as MIMO multiantenna transceivers or CDMA networks. Extending these findings to less homogeneous systems such as sensor networks is one of the objectives of this activity.
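
A quick numerical check of the random-matrix claims is possible without any eigendecomposition: the first two spectral moments of a sample-covariance-type matrix can be read off matrix traces and compared with the Marchenko-Pastur predictions 1 and 1 + c. A rough simulation sketch (function and parameter names are ours):

```python
import random

def mp_moments(n_rows, n_cols, seed=0):
    """Empirical first two spectral moments of W = H H^T / n_cols for an
    iid standard-Gaussian matrix H. As the dimensions grow, Marchenko-
    Pastur theory predicts 1 and 1 + c, with aspect ratio c = n_rows/n_cols."""
    rng = random.Random(seed)
    h = [[rng.gauss(0, 1) for _ in range(n_cols)] for _ in range(n_rows)]
    # W = H H^T / n_cols, built entry by entry
    w = [[sum(h[i][k] * h[j][k] for k in range(n_cols)) / n_cols
          for j in range(n_rows)] for i in range(n_rows)]
    m1 = sum(w[i][i] for i in range(n_rows)) / n_rows            # tr(W) / N
    m2 = sum(w[i][j] * w[j][i] for i in range(n_rows)
             for j in range(n_rows)) / n_rows                    # tr(W^2) / N
    return m1, m2

m1, m2 = mp_moments(60, 120)   # aspect ratio c = 0.5
```

Even at the modest size 60 × 120 the empirical moments already sit close to the asymptotic values 1 and 1.5, which is why these deterministic limits are so useful for system design.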

 Percolation and random graph tools will be used to analyze the scaling laws of capacity and the throughput-delay trade-offs in ad-hoc and sensor networks, as well as to study thoroughly the influence of connectivity properties and of spatial density.
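
The connectivity questions above are easy to probe by simulation: scatter n nodes uniformly in the unit square, connect pairs closer than a given radius, and estimate the probability that the resulting random geometric graph is connected. A Monte Carlo sketch with illustrative parameters:

```python
import random

def connect_prob(n, radius, trials=200, seed=0):
    """Monte Carlo estimate of the probability that a random geometric
    graph (n uniform nodes in the unit square, links shorter than
    'radius') is fully connected, using union-find for the components."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]    # path halving
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                if dx * dx + dy * dy <= radius * radius:
                    parent[find(i)] = find(j)    # merge components
        if len({find(i) for i in range(n)}) == 1:
            hits += 1
    return hits / trials
```

Sweeping the radius (or, equivalently, the node density) exposes the sharp percolation-style transition from almost surely disconnected to almost surely connected that underlies the scaling laws mentioned above.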

 Techniques and concepts developed in machine learning, pattern recognition, and multivariate statistical analysis can also provide important hints as to how to best solve fundamental problems in advanced communications.

 Game theory provides an interesting mathematical basis for the design of future wireless networks. In this context, each node or user is considered as a player making decisions on issues such as transmit power, packet forwarding, back-off times, etc. We plan to investigate: a) the amount of local information required for effective distributed decisions towards achieving a globally efficient emergent behavior, b) the process of learning in highly dynamic communication scenarios, and c) the influence of mobility in the node interaction.
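
A classical instance of such distributed decision-making is best-response power control: each link repeatedly picks the minimum transmit power meeting its SINR target given the interference it currently sees and, when the targets are feasible, the iteration converges to the unique equilibrium. The sketch below is in the spirit of the well-known Foschini-Miljanic iteration; the gain values are arbitrary:

```python
def best_response_power_control(g, noise, target_sinr, iters=500):
    """Each link's best response: the minimum power meeting its SINR
    target given the other links' current powers. g[i][j] is the gain
    from transmitter j to receiver i."""
    n = len(g)
    p = [1.0] * n
    for _ in range(iters):
        p = [target_sinr * (noise + sum(g[i][j] * p[j]
             for j in range(n) if j != i)) / g[i][i] for i in range(n)]
    return p

gains = [[1.0, 0.1], [0.2, 1.0]]    # weak cross-gains: targets are feasible
powers = best_response_power_control(gains, noise=0.1, target_sinr=2.0)
```

Notably, each player only needs its own measured interference, not global knowledge, which is precisely the kind of local-information question raised in point a) above.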

 Tools from statistical physics, such as the replica method originally developed for the analysis of spin glasses, can also be helpful in evaluating the performance of large communication networks.

 Biological systems are characterized by their high degree of self-organization, scalability, autonomous operation and robustness against failures and impairments. Understanding the mechanisms that underlie these traits may lead to new wireless network arrangements and procedures. We will investigate: a) self-organized communication protocols based on reaction-diffusion biological patterns, b) highly adaptive routing based on attractor-selection mechanisms such as those found in gene networks, c) robust decentralized synchronization based on mutual coupling of biological oscillators and d) novel epidemic methods in networking.
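
Point c) can be illustrated with the standard Kuramoto model: identical phase oscillators, each nudged toward the phases of the others, lock into synchrony from random initial conditions without any central clock. A toy Euler simulation (all parameter values are illustrative):

```python
import math
import random

def kuramoto(n=20, coupling=2.0, steps=4000, dt=0.01, seed=0):
    """Euler simulation of identical, all-to-all coupled Kuramoto phase
    oscillators. Returns the order parameter r in [0, 1]; r near 1
    means the ensemble has synchronized."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = 1.0                                   # common natural frequency
    for _ in range(steps):
        # each oscillator is pulled toward every other oscillator's phase
        theta = [t + dt * (omega + (coupling / n) *
                 sum(math.sin(u - t) for u in theta)) for t in theta]
    c = sum(math.cos(t) for t in theta) / n
    s = sum(math.sin(t) for t in theta) / n
    return math.hypot(c, s)
```

With the coupling switched off the initial incoherence simply persists, while any positive mutual coupling drives the order parameter toward 1; this is the robustness that makes such mechanisms attractive for decentralized network synchronization.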

 Novel methods from optimization theory will also be investigated, such as alternative optimization decomposition methods leading to new distributed algorithms, reverse-engineering of co-operative communication protocols, and operational duality between source and channel codes.
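
Dual decomposition, one of the decomposition methods alluded to above, can be sketched on a toy network utility maximization problem: links post prices, each source individually maximizes its log-utility minus the cost of its path, and a subgradient update drives the prices to the optimum. The topology and step sizes below are of our own choosing:

```python
def num_dual(routes, capacities, steps=5000, step_size=0.01):
    """Distributed rate allocation maximizing sum log(x_s) subject to
    link capacities, via dual decomposition with a subgradient update."""
    n_links = len(capacities)
    lam = [1.0] * n_links                        # link prices (dual variables)
    x = []
    for _ in range(steps):
        # each source solves max log(x) - x * path_price  =>  x = 1 / price
        x = [1.0 / max(sum(lam[l] for l in r), 1e-9) for r in routes]
        # each link nudges its price by (local traffic - capacity)
        for l in range(n_links):
            load = sum(x[s] for s, r in enumerate(routes) if l in r)
            lam[l] = max(lam[l] + step_size * (load - capacities[l]), 0.0)
    return x, lam

# one long flow over both links, one short flow per link, unit capacities
routes = [[0, 1], [0], [1]]
rates, prices = num_dual(routes, [1.0, 1.0])
```

The long flow settles near 1/3 and each short flow near 2/3, the proportionally fair allocation; no node ever needs global knowledge, only the prices along its own path, which is exactly the appeal of decomposition-based distributed algorithms.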
