Large Language Models in Wireless Networks: Fundamentals, Applications, and Future Prospects
Large language models (LLMs), a subfield of natural language processing and generative artificial intelligence (AI), have recently attracted considerable attention from the research community as a revolutionary technology in the field of AI, and as a gateway to realizing the ultimate vision of self-evolving networks. As promising candidates to revolutionize technologies across various fields, we believe that generative AI models can introduce a radical change in wireless network design and operation. This tutorial aims to provide a comprehensive overview of the interplay between LLMs and wireless networks, covering key principles, use cases and applications, and future prospects. In more detail, we will elucidate the fundamental principles of LLMs, highlighting their underlying architecture, training methodology, and inherent capabilities, ranging from natural language to multi-modal understanding and generation. Subsequently, we will focus on the roles of LLMs in future wireless communication systems. On the one hand, we will explore how LLMs can enable autonomy in wireless networks, such as by understanding wireless data and enabling a wide range of wireless tasks. Example use cases of adaptive beamforming, super-resolution localization, and wireless sensing will be discussed. On the other hand, we will discuss how future networks could realize collective intelligence from massive numbers of LLM-empowered devices, through knowledge-driven planning and reasoning, semantic communication and protocol learning, and multi-agent gaming and cooperative control. Example use cases of connected autonomous vehicles and intent-driven networking will be discussed. Finally, we will highlight the limitations and challenges that are anticipated to arise from building and integrating LLMs into wireless networks, and sketch the roadmap towards fully self-building and self-evolving wireless networks.