Sparse and Low-Rank Optimization for Dense Wireless Networks: Models, Algorithms and Theory
Summary

As mobile data traffic keeps growing at an exponential rate and mobile applications pose ever more stringent and diverse requirements, wireless networks are facing unprecedented pressure. To further evolve wireless networks and maintain their competitiveness, network densification stands out as a promising approach. By deploying more access points, possibly with different capabilities, we can not only increase network capacity, but also improve network energy efficiency, enable low-latency mobile applications, and provide access for massive numbers of mobile devices. However, densification also brings formidable challenges to network optimization and resource allocation, given the highly complex network topology, the massive amount of required side information, and the heavy computational burden. Typical design problems are nonconvex in nature and of enormously large scale, so disruptive techniques are needed to fully exploit the benefits of dense wireless networks. The aim of this tutorial is to present recent advances in sparse and low-rank techniques for optimizing dense wireless networks, with comprehensive coverage of modeling, algorithm design, and theoretical analysis. Through typical examples, we will demonstrate the power of this set of tools and highlight their ability to solve key design problems in dense wireless networks.
Structured Sparse Optimization for Large-Scale Network Adaptation

Network densification leads to large-scale network optimization problems involving both discrete and continuous decision variables, which motivates enforcing sparsity structures in the solutions. Examples include radio access point selection, backhaul data assignment, user admission control, and massive device connectivity. Structured sparsity offers both modeling flexibility and algorithmic speedups. However, the sparsity function is nonconvex and thus computationally intractable to optimize directly. Furthermore, sparse optimization problems in dense wireless networks have more complicated structures, raising new algorithmic and theoretical challenges. We will present novel and efficient convex relaxation methodologies for both objectives and constraints, followed by various smoothing techniques.
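To make the convex-relaxation idea concrete, the sketch below (a toy NumPy example, not code from the tutorial; all names, sizes, and parameters are illustrative assumptions) replaces the nonconvex ℓ0 sparsity function with its ℓ1 convex surrogate and solves the relaxation with the proximal gradient (ISTA) iteration, whose key step is a cheap soft-thresholding:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, iters=2000):
    """Proximal gradient (ISTA) for the l1 relaxation
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # smooth part: gradient step
        x = soft_threshold(x - step * grad, step * lam)  # nonsmooth part: prox step
    return x

# Toy instance: recover a 3-sparse vector from 40 linear measurements in R^100.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 20, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # indices of the large recovered entries
```

The same proximal step, applied to blocks of variables under a mixed ℓ2/ℓ1 (group-sparsity) norm, is what enables selection-type decisions such as switching an access point on or off.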
Generalized Low-Rank Optimization with Network Side Information

Dense wireless networks are highly complex to optimize, so it is critical to exploit the available network side information. For example, network connectivity information, cached content at the access points, and locally computed intermediate values all serve as exploitable side information for efficiently designing coding and decoding schemes in dense wireless networks. We provide a generalized low-rank matrix modeling framework to exploit this side information, which helps to optimize efficiently across communication, computation, and storage resources. Unfortunately, the rank function is nonconvex, and minimizing it directly is computationally infeasible. Furthermore, convex relaxation approaches are inapplicable to generalized low-rank optimization problems that lack favorable structure. We will introduce recently proposed nonconvex paradigms that handle the rank constraints directly via Riemannian optimization and matrix factorization.
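The matrix-factorization route can be sketched in a few lines: instead of a convex surrogate for the rank, the matrix is parameterized as a product of two thin factors and the factors are updated by alternating least squares. The toy example below (plain NumPy; problem sizes, seeds, and function names are illustrative assumptions, not from the tutorial) completes a low-rank matrix from a subset of its entries:

```python
import numpy as np

def als_complete(M, mask, r, iters=50, reg=1e-3):
    """Rank-r matrix completion by alternating least squares on the nonconvex
    factorization X = U @ V.T, fitting only the observed entries
    (mask[i, j] = True where M[i, j] is observed)."""
    m, n = M.shape
    rng = np.random.default_rng(1)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        # Fix V: each row of U solves a small r x r regularized least-squares problem.
        for i in range(m):
            Vi = V[mask[i]]
            U[i] = np.linalg.solve(Vi.T @ Vi + reg * np.eye(r), Vi.T @ M[i, mask[i]])
        # Fix U: symmetric update for each row of V (i.e., each column of X).
        for j in range(n):
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + reg * np.eye(r), Uj.T @ M[mask[:, j], j])
    return U @ V.T

# Toy instance: a 30 x 30 rank-2 matrix with roughly 60% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.6
X = als_complete(M, mask, r=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Each subproblem here is an r × r solve, so the per-iteration cost scales with the target rank rather than with the ambient matrix dimensions, which is the main appeal of factorization-based methods over nuclear-norm relaxations at scale.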
Large-Scale Convex and Nonconvex Optimization: Algorithms and Analysis

Numerical optimization algorithms can be classified as first-order or second-order methods, depending on whether they use only gradient information or also second derivatives. Second-order methods usually converge in fewer iterations, with the caveat that each iteration is more expensive. In general, there is a trade-off between the per-iteration computation cost and the total number of iterations, though first-order methods often scale better to large problems. While optimization problems in communication systems have typically been solved in the convex paradigm with second-order methods, thanks to the ease of use of the CVX toolbox, we have observed the necessity of first-order methods and the importance of the nonconvex paradigm. We develop and analyze large-scale convex and nonconvex optimization algorithms by leveraging tools from operator splitting theory, random matrix theory, learning theory, convex geometry, and differential geometry.
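The first- versus second-order trade-off can be seen on a small regularized logistic regression: gradient descent takes many cheap O(mn) iterations, while Newton's method takes only a handful of iterations, each paying for an O(n^3) linear solve. The sketch below (plain NumPy, synthetic data; all names and constants are illustrative) runs both to the same minimizer:

```python
import numpy as np

def logistic_grad_hess(w, A, y, reg=0.1):
    """Gradient and Hessian of l2-regularized logistic regression loss."""
    p = 1.0 / (1.0 + np.exp(-(A @ w)))                 # predicted probabilities
    grad = A.T @ (p - y) / len(y) + reg * w
    hess = (A.T * (p * (1 - p))) @ A / len(y) + reg * np.eye(len(w))
    return grad, hess

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
y = (A @ rng.standard_normal(20) > 0).astype(float)

# First-order: gradient descent -- O(mn) per iteration, many iterations.
w_gd = np.zeros(20)
for _ in range(2000):
    g, _ = logistic_grad_hess(w_gd, A, y)
    w_gd -= 0.5 * g

# Second-order: Newton's method -- O(n^3) solve per iteration, few iterations.
w_nt = np.zeros(20)
for _ in range(10):
    g, H = logistic_grad_hess(w_nt, A, y)
    w_nt -= np.linalg.solve(H, g)

print(np.linalg.norm(w_gd - w_nt))  # both reach (nearly) the same minimizer
```

At this size Newton is effortless, but for the dimensions arising in dense networks the n × n Hessian solve becomes the bottleneck, which is exactly the scaling argument for first-order and operator-splitting methods made above.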
