Xinran Zhang, Jingyuan Liu, Tao Hu, Zheng Chang, Yanru Zhang, Geyong Min
Recently, realizing machine learning (ML)-based technologies with the aid of mobile edge computing (MEC) in the vehicular network to establish an intelligent transportation system (ITS) has gained considerable interest. To fully utilize the data and onboard units of vehicles, it is possible to implement federated learning (FL), which trains models locally and aggregates the results centrally, in the vehicular edge computing (VEC) system for a vision of connected and autonomous vehicles. In this article, we review and present the concept of FL and introduce a general architecture of FL-assisted VEC to advance the development of FL in the vehicular network. The enabling technologies for designing such a system are discussed and, with a focus on the vehicle selection algorithm, performance evaluations are conducted. Recommendations on future research directions are highlighted as well.
In recent years, ITSs have emerged as a promising solution for improving transportation efficiency and safety. As a key ITS technique, vehicular ad hoc networks (VANETs) provide wireless connections among vehicles, roadside units (RSUs), pedestrians, and drivers to enable information exchange. However, with massive amounts of sensors generating extensive onboard data, processing and analyzing data with low latency poses a challenge [1]. To satisfy both real-time and reliability criteria, MEC has gained growing interest by providing enormous computation, storage, and communication resources [2]. To enable the computation in proximity to the vehicular data source, VEC integrates both MEC and VANETs by allowing vehicles’ convenient and on-demand access to the network’s edges [3], like RSUs, the base station, and surrounding vehicles.
To pave the way to intelligent VEC by exploiting large-volume vehicular data, ML has attracted much attention, particularly in complicated decision problems. In classical ML methods, all clients’ data are transmitted to the central server for processing to learn a model. Nevertheless, clients face data misuse and leakage risks in the uplink transmission, and constrained communication resources result in unexpected latency. To solve these problems, FL was proposed in [4] to allow clients to learn a model collaboratively while keeping their personal data on the devices. Rather than sending raw data, clients train neural networks locally and share these model parameters with the server. After collecting local parameters, the server aggregates them into an updated model and then broadcasts it back to clients for further training. To support FL in vehicular scenarios, the authors in [5] propose the federated vehicular network (FVN) and enumerate enabling technologies and the architecture of FVNs.
FL facilitates a more reliable, efficient, and secure VEC system for several reasons. First, FL allows drivers to participate in the training without compromising their data privacy, resulting in higher vehicle engagement and therefore more distributed vehicular data for exploration. Second, FL reduces transmission time significantly as local models, instead of raw data, are uploaded to the server, which mitigates communication bottlenecks. Third, personal data are utilized locally by vehicles, thereby alleviating data leakage risks. These advantages support promising applications such as cooperative autonomous driving, intelligent transportation navigation, traffic signal operation, electrical vehicle charging management, automated traffic law enforcement, and so forth. However, FL-assisted VEC encounters difficulties like expensive communication overhead, poor learning performance, limited resources, and low client motivation for participation. To address these issues, numerous works focus on adaptive resource allocation, enhanced algorithm development, client selection, and incentive mechanisms.
In particular, aiming at enhancing the accuracy and efficiency of FL, Ye et al. [6] presented a selective model aggregation method and leveraged contract theory to alleviate information asymmetry. Liu et al. [7] proposed a customized, partial, and flexible FL approach (FedCPF) to mitigate aggregation noise and improve system efficiency through customized local training, flexible aggregation, and partial vehicle selection dependent on local data size. To deal with the high mobility of vehicles and constrained resources, which increase the uncertainty of learning performance and training latency, Li et al. [8] designed a client selection and resource allocation scheme, where the joint optimization problem is solved by a federated deep reinforcement learning-based caching approach.
In this article, we are motivated to investigate the FL-assisted VEC system in light of the growing attention toward ITSs and the booming development of FL. To the best of our knowledge, this work represents an early attempt to provide a comprehensive overview of FL-assisted VEC. We first present background information about the FL procedure. Then, we introduce the overall architecture of the FL-assisted VEC system and identify key enabling technologies to ensure that the system satisfies the most stringent criteria. Next, to show the significance of enabling technologies for system adaptation and improvement, we conduct a case study that focuses on the client selection scheme. We observe that vehicles' high mobility, dynamic wireless channels, heterogeneous vehicular systems, and statistical heterogeneity introduce great uncertainty into the convergence performance of FL, so we propose a mobility- and data-aware vehicle selection scheme to mitigate these impacts. Finally, we discuss future research directions for FL-assisted VEC and suggest potential solutions that could be pursued to address the problems.
FL is a decentralized and collaborative ML method that involves collecting, aggregating, and analyzing data from multiple sources without compromising data privacy or security. As a leading FL method, federated averaging (FedAvg) was first proposed in [4] to alleviate data leakage risk and communication burdens. In FL, clients train a global model collaboratively while keeping the generated data on their own devices. The training process is conducted over multiple rounds, a single round of which consists of the following four steps.
Due to limited communication resources, only a subset of clients is picked by the server to participate in FL training. After that, the server broadcasts the latest global model to the selected clients.
Each selected client conducts several local training rounds with its local data to update the local model. To minimize the local loss function, stochastic gradient descent is employed, typically by randomly sampling a small batch of data to reduce extra computation time.
Each selected client transmits the updated model to the server via wireless communication channels. The uploading time depends on the size of the model and the wireless channel conditions.
After collecting the updated local models, the server aggregates them into a global model. The aggregation strategy varies across scenarios; in FedAvg, the server simply averages all the local parameters.
These four steps are repeated until the loss function converges or the maximum number of iterations is reached. When the iterations stop, all vehicles can download the global model for further applications. Today, FL is deployed in a wide range of fields, including health care, blockchain, the Internet of Things, smart cities, and so on. FL is particularly appealing for VEC systems because vehicles are equipped with various sensors and increasing computation resources. Hence, they can train ML models locally to protect data privacy and reduce communication overhead, which enables time-sensitive decisions for critical vehicular applications.
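To make the four steps concrete, the following is a minimal, self-contained sketch of one FedAvg round in PyTorch with synthetic clients and a toy model. The helper names (`local_update`, `fedavg_aggregate`), the model shape, and the hyperparameters are illustrative choices, not part of the FedAvg specification in [4].

```python
import copy
import random

import torch
from torch import nn


def local_update(global_model, data, targets, epochs=2, lr=0.01, batch_size=32):
    """Step 2: a selected client refines the broadcast model on its own data via mini-batch SGD."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        perm = torch.randperm(data.size(0))
        for i in range(0, data.size(0), batch_size):
            idx = perm[i:i + batch_size]              # small random batch keeps computation cheap
            opt.zero_grad()
            loss_fn(model(data[idx]), targets[idx]).backward()
            opt.step()
    return model.state_dict(), data.size(0)


def fedavg_aggregate(states, sizes):
    """Step 4: the server averages the collected local parameters, weighted by local data size."""
    total = sum(sizes)
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(states, sizes))
    return avg


# Synthetic clients; in VEC these would be vehicles holding onboard sensor data.
global_model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
clients = [(torch.randn(200, 20), torch.randint(0, 10, (200,))) for _ in range(10)]

for rnd in range(3):                                   # repeated until convergence in practice
    selected = random.sample(clients, 4)               # step 1: pick a subset and broadcast
    results = [local_update(global_model, x, y) for x, y in selected]   # steps 2-3
    global_model.load_state_dict(fedavg_aggregate(*zip(*results)))      # step 4
```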
This section first elaborates on the general framework of FL-assisted VEC and the details of the FL process. Then, the enabling technologies used to advance FL-assisted VEC are discussed.
The FL-assisted VEC system comprises two main entities and three virtual layers, as illustrated in Figure 1. The edge servers located at the roadside and the surrounding vehicles are the main entities that participate in the FL process. Vehicles use their data to train the model locally, while the edge servers perform global aggregation, distribute certain tasks to the vehicles, and exchange information with each other. In the context of data flows, the functions of FL-assisted VEC can be categorized into three layers: data source, data storage, and ML. The data source layer consists of vehicle and on-road data, roadside infrastructure data, environment data, and other sensed data. These data are then stored and preanalyzed in the data storage layer in preparation for further learning and training. In the ML layer, the learning, training, optimization, and aggregation algorithms are implemented to generate a global model. The coordination of the three layers ensures that FL-assisted VEC can be realized efficiently and that data sharing among different entities can be explored for real-time and resource-efficient services.
Figure 1 An FL-assisted VEC system framework.
Figure 2 depicts the specific FL training workflow in VEC, which is similar to the processes introduced in the “Preliminary” section. In FL-assisted VEC, the edge server is capable of computing and allocating tasks over a number of selected vehicles. Hence, it can analyze surrounding information from a broader perspective and provide analytic results to the vehicles to accelerate the learning process. Despite this benefit, the system faces unique difficulties as follows. Unlike traditional FL in stationary scenarios, vehicles continue to move in VEC systems, which has a great impact on FL accuracy. Therefore, the vehicle selection algorithm should consider the varying length of time that vehicles stay within an edge server’s coverage. Furthermore, vehicles equipped with different sensors generate not independent and identically distributed (non-IID) and unbalanced data, leading to significant variations in the local models.
Figure 2 An FL training flow diagram in the VEC system.
As we can see from Figures 1 and 2, steps 1 and 3 are vehicle-to-infrastructure communications in the VANETs. Step 2 concerns the interaction between the data storage and data source layers, and steps 2 and 4 concern the interaction between the data storage and ML layers.
The deployment of FL-assisted VEC heavily depends on the enabling technologies in the following four fields, and the combination of all these technologies advances the development of effective and efficient VEC systems.
FL-assisted VEC involves different parties, such as vehicles, edge servers, and heterogeneous resources like vehicles’ and edge servers’ computation resources and radio resources for data delivery. To achieve the desired objective of minimizing FL loss, the optimization of heterogeneous computation and radio resources is necessary. In the computation resource allocation problem, the resources can be either computing units of a vehicle or an edge computing server. Essentially, increasing the usage of computation resources at the vehicle for FL local computation leads to better local learning accuracy at the cost of higher energy consumption. Additionally, effective data transmission over the wireless network is also a bottleneck for realizing FL-assisted VEC, especially due to the unique challenges presented by high-mobility networks. Therefore, joint optimization of computation and radio resources has become an essential research area of focus in the development of FL-assisted VEC systems. Specifically, computation optimization at the server end in the vehicular networks with long-term quality-of-service requirements is studied in [9], which aims to maximize the data rate while guaranteeing delay constraints. Moreover, the time-varying topology and rapidly changing channels in vehicle-to-vehicle (V2V) communications are investigated in [10]. To improve communication rates, the authors propose a new federated multiagent deep learning algorithm where V2V agents train the local model based on their observed environmental conditions, and all the local models are aggregated to learn a global model to jointly optimize the channel allocation and power assignment.
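As a rough illustration of this computation-energy-communication tradeoff, the snippet below evaluates a commonly used CPU energy model and a Shannon-capacity uplink. The constants (cycles per bit, effective capacitance `kappa`, bandwidth, powers) are illustrative assumptions and are not taken from [9] or [10].

```python
import math


def local_cost(s_bits, f_hz, cycles_per_bit=1e3, kappa=1e-28):
    """Per-round local training cost under a commonly used CPU model (illustrative constants)."""
    t_cmp = cycles_per_bit * s_bits / f_hz             # computation delay [s]
    e_cmp = kappa * cycles_per_bit * s_bits * f_hz**2  # energy grows quadratically with frequency
    return t_cmp, e_cmp


def upload_delay(model_bits, bandwidth_hz, tx_power_w, channel_gain, noise_w):
    """Uplink delay of a local model over a Shannon-capacity wireless link."""
    rate = bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_w)  # bit/s
    return model_bits / rate


# Doubling the CPU frequency halves the local delay but quadruples the energy consumption.
for f in (1e9, 2e9):
    print(f, local_cost(s_bits=8e6, f_hz=f))
print(upload_delay(model_bits=1e6, bandwidth_hz=1e6, tx_power_w=0.1,
                   channel_gain=1e-6, noise_w=1e-10))
```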
As the VEC system, with its dynamic and complex features, faces many challenges, it is essential to explore advanced learning schemes that can satisfy ever-increasing criteria. First, to deal with heterogeneous vehicular data caused by vehicles' varying sensors and differing driving habits, numerous algorithms have been developed. These algorithms replace the objective of minimizing the global loss with a more personalized objective, which is better suited to varied data. Second, limited wireless resources lead to long uplink transmission latency. To mitigate this communication latency, compression algorithms such as quantization and sparsification are employed, as sketched below. Finally, vehicles keep moving and are likely to disconnect from the edge server. Hence, it is necessary to design robust and adaptive aggregation schemes rather than rely on a simple average of all the models.
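As one example of the compression options mentioned above, here is a minimal top-k sparsification sketch applied to a model update before uplink transmission. The 1% retention ratio is an arbitrary illustrative choice, and practical schemes usually add error feedback, which is omitted here.

```python
import torch


def topk_sparsify(update, ratio=0.01):
    """Keep only the largest-magnitude entries of a model update before uplink transmission."""
    flat = update.flatten()
    k = max(1, int(ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)                 # positions of the k largest entries
    values = flat[idx]
    # Only (idx, values, shape) need to be uploaded; the server rebuilds the dense update.
    dense = torch.zeros_like(flat)
    dense[idx] = values
    return dense.view_as(update), (idx, values)


update = torch.randn(64, 128)
compressed, payload = topk_sparsify(update, ratio=0.01)  # ~1% of the entries are transmitted
```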
The dynamic federated proximal (DFP) algorithm is developed in [11] to enable rapid adaptation to changing transportation conditions. In the DFP algorithm, the authors minimize the sum of the global loss function and an L2 regularizer to mitigate the negative impacts of non-IID data. In [12], because the clients in the VANETs differ significantly from one another, each client applies heterogeneous model quantization to reduce the communication cost, and an FL double optimization algorithm is proposed to aggregate these local parameters via a weighted sum at the server end.
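The sketch below shows, in generic form, how an L2 (proximal) regularizer can be added to a client's local objective to keep local models close to the broadcast global model under non-IID data. It is written in the spirit of the DFP objective but is not the exact formulation of [11], and the coefficient `mu` is an assumed hyperparameter.

```python
import torch
from torch import nn


def proximal_local_loss(model, global_params, inputs, targets, mu=0.01):
    """Local task loss plus an L2 penalty that keeps the local model close to the global one."""
    task_loss = nn.CrossEntropyLoss()(model(inputs), targets)
    prox = sum((w - w_g.detach()).pow(2).sum()
               for w, w_g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox


# global_params is a frozen snapshot of the broadcast model, e.g.:
# global_params = [p.clone() for p in global_model.parameters()]
```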
Due to limited communication resources, the server selects a subset of vehicles for training. These selected vehicles then upload their local gradients to the server for global aggregation. However, in VEC scenarios, the data and hardware conditions of the vehicles vary significantly, and vehicles move at high speeds on the road; consequently, the choice of vehicles has a profound impact on FL performance, energy consumption, and communication cost. For example, due to system heterogeneity such as limited computation and radio resources, vehicles may fail to complete their training within the allowed time duration, increasing the uncertainty in model performance. Moreover, vehicles' positions and velocities change constantly, and they are likely to quickly enter the next edge server's coverage. Furthermore, with different data quality and data amounts, vehicles contribute differently to the global model. Hence, it is of great significance to select appropriate vehicles by considering these factors. Aside from the aforementioned works [7], [8], a multicriteria-based vehicle selection method is proposed in [13], where the authors jointly consider the clients' CPU computational capacities, memory storage, energy consumption, and training time to predict whether they can finish the FL tasks.
In FL-assisted VEC, each vehicle keeps private data on the device for local training and then uploads the updated model to the server. However, due to privacy and energy concerns, vehicles that contribute personal data, communication, and computational resources may be reluctant to participate in FL tasks. Thus, it is necessary to design incentive mechanisms to motivate vehicles. In [14], the authors leverage a three-layer Stackelberg game to incentivize edge devices, where contract theory is applied to formalize the interaction between edge devices and edge servers. To develop a high-quality model with communication efficiency, a two-stage Stackelberg game is formulated in [15] to facilitate the interaction between the MEC server and clients.
In FL-assisted VEC, both the vehicles and edge computing servers seek to maximize their own utility functions, which are generally considered net revenue. The vehicles’ cost can involve data privacy loss, computational energy consumption, and uploading energy consumption, while the reward can be calculated as the obtained profit paid by edge computing servers. For the edge servers, the cost can be represented as the payment to vehicles, and the reward is the benefits for completing FL tasks and achieving a certain accuracy level. By updating the decision variables in an iterative manner, the two parties reach a Nash equilibrium. Reasonable incentives encourage more vehicles to participate in FL tasks, thus ensuring a highly accurate and robust FL model.
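To illustrate how such an iterative leader-follower interaction can settle at an equilibrium, here is a toy sketch with an assumed quadratic vehicle cost and a logarithmic server valuation. It is not the specific game of [14] or [15], and all coefficients are hypothetical.

```python
import math

costs = [0.5, 0.8, 1.2, 2.0]   # hypothetical per-vehicle cost coefficients (privacy, energy, ...)
lam = 10.0                     # hypothetical server valuation of the aggregate training effort


def vehicle_effort(r, c):
    """Follower best response: maximize r*x - c*x**2 over x, giving x* = r / (2c)."""
    return r / (2 * c)


def server_utility(r):
    """Leader utility once the followers' reactions are anticipated."""
    total = sum(vehicle_effort(r, c) for c in costs)
    return lam * math.log(1 + total) - r * total


# The leader utility is concave in r here, so a ternary search finds the Stackelberg reward.
lo, hi = 0.0, 20.0
for _ in range(100):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if server_utility(m1) < server_utility(m2):
        lo = m1
    else:
        hi = m2
r_star = (lo + hi) / 2         # equilibrium reward; each vehicle then contributes r_star / (2c)
```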
In this section, we conduct a case study with an emphasis on the vehicular selection scheme, demonstrating the effectiveness of the aforementioned enabling technologies. To improve FL performance while reducing communication overhead, we propose a novel metric to evaluate the impacts of vehicles’ mobility and system heterogeneity on the system.
Because vehicles keep moving on the road and wireless channels are uncertain, the VEC system is highly dynamic and complicated. Moreover, variations in sensors and hardware among vehicles result in significant differences in data and computation capacities. To reduce convergence time and enhance learning accuracy, we propose a probabilistic vehicle selection scheme based on the following two factors.
Because vehicles on the road have wide-ranging speeds and directions, they can leave the edge server's coverage area quickly. Particularly when wireless channels are uncertain, vehicles may not stay long enough to finish the training. To measure the effects of vehicles' high mobility and the uncertain wireless channels, we formulate the likelihood that each vehicle completes the local computation and uplink transmission within the allowed time.
Because each FL iteration is decoupled from the others, we consider the calculation in one FL iteration in the following context to simplify further analysis. Specifically, we denote the distance of vehicle $k$ from the edge server as $d_k$, with its data size and computation capacity being $s_k$ and $f_k$, respectively. We assume the uplink channels to be Rayleigh-fading channels to set up a statistical channel state information model. To identify a trustworthy and efficient subset of vehicles, we calculate the mobility-aware probability $U^{m}(\cdot)$ by referring to [11, eq. (39)] as
\[U^{m} = g\left(\frac{h\left(a - \frac{s_k b}{f_k}\right)}{d_k^{-\gamma}}\right) \tag{1}\]
where $g(\cdot)$ and $h(\cdot)$ are exponential functions with bases $e$ and 2, respectively; $\gamma$ is the path loss exponent; and $a$ and $b$ are related to the allowed time window $\tau$ and the local epochs, respectively. As $f_k$ increases, the probability gets higher, while increasing $s_k$ and $d_k$ leads to a lower probability.
Because vehicles are equipped with various sensors and hardware, they collect different amounts of data with varying computation capacities, which leads to distinct contributions to the global model. Thus, to utilize the data in a communication-efficient way, the edge server should select participants with larger data sizes and higher computation capacities. To evaluate vehicular data and system differences, we define the data-aware probability $U^{s}$ as
\[U^{s} = \frac{s_k}{(1-\beta)^{m(f_k)}} \tag{2}\]
where the control parameter $\beta \in (0,1)$ balances the effects of data sizes and computation capacities, with a higher $\beta$ indicating an increased impact of computation capacities on the probability. Moreover, the normalized function $m(\cdot)$ calculates the relative computation capacity. Note that $U^{s}$ becomes larger as $s_k$ and $f_k$ increase, which aligns with our expectations, as larger data sizes and higher computation capacities result in greater contributions. Note also that there is a tradeoff between $U^{m}$ and $U^{s}$ with respect to the data size $s_k$. In the “Performance Evaluations” section, we extensively explore the relationships between the effects of mobility and data contributions on the performance of FL.
Overall, we propose a joint probability to measure the likelihood of vehicle $k$ being selected:
\[U_k = \alpha U^{m} + (1-\alpha) U^{s} \tag{3}\]
where the scaling coefficient $\alpha \in (0,1)$ balances the influence of mobility and vehicle specificity. Hence, our selection scheme favors vehicles that are more likely to finish the FL training within the given time duration and contribute significantly to the global model.
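A possible numerical reading of (1)-(3) is sketched below. Because the constants are not fixed in this article, the completion probability in `mobility_prob` is reconstructed from the Rayleigh-fading interpretation of (1) (remaining time after local computation, required rate, and the complement of the outage probability), $m(\cdot)$ is taken as $f_k/f_{\max}$, and $U^{s}$ is rescaled into (0, 1] before the combination in (3); all constants and these modeling choices are assumptions rather than values from the case study.

```python
import math


def mobility_prob(s_k, f_k, d_k, tau=1.0, cycles_per_sample=1e6, model_bits=1e6,
                  bandwidth=1e6, snr_ref=1e8, gamma=3.0):
    """One reading of eq. (1): probability that local computation plus the Rayleigh-fading
    uplink finish within the time window tau (all constants are illustrative)."""
    t_left = tau - cycles_per_sample * s_k / f_k            # time left for uploading after training
    if t_left <= 0:
        return 0.0
    req_snr = 2 ** (model_bits / (bandwidth * t_left)) - 1  # SNR needed to meet the deadline
    return math.exp(-req_snr * d_k ** gamma / snr_ref)      # complement of the Rayleigh outage


def data_prob(s_k, f_k, f_max, beta=0.5):
    """Eq. (2): data-aware score rewarding larger datasets and faster CPUs; m(.) = f_k / f_max."""
    return s_k / (1 - beta) ** (f_k / f_max)


def joint_prob(vehicles, alpha=0.6):
    """Eq. (3): convex combination of the mobility-aware and (rescaled) data-aware terms."""
    f_max = max(v["f"] for v in vehicles)
    um = [mobility_prob(v["s"], v["f"], v["d"]) for v in vehicles]
    us = [data_prob(v["s"], v["f"], f_max) for v in vehicles]
    us = [u / max(us) for u in us]                          # scale U^s into (0, 1] (assumed)
    return [alpha * m + (1 - alpha) * s for m, s in zip(um, us)]
```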
Furthermore, instead of picking the vehicles with the top-$k$ highest selection probabilities in a deterministic manner, we allow a confidence interval $c$ for sampling. We admit vehicles whose selection probability is greater than $c\%$ of that of the top $((1-\epsilon)\times K)$-th participant to participate in this training round, where $K$ is defined as the total number of vehicles within the edge server's coverage. The specific workflow is shown in Figure 3. The computational complexity of the proposed algorithm therefore lies mainly in sorting the vehicles according to their selection probabilities using the heapsort algorithm, whose time complexity is $O(K\log K)$. As a result, the overall time complexity of our selection scheme is $O(K\log K)$.
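Given the joint probabilities, the thresholding step can be sketched as follows. Interpreting the rule as "admit every vehicle whose probability exceeds $c\%$ of the probability of the $\lceil(1-\epsilon)K\rceil$-th ranked vehicle" is our reading of the description above, and the values of $\epsilon$ and $c$ are illustrative.

```python
import heapq
import math


def select_vehicles(probs, epsilon=0.5, c=90.0):
    """Sort the joint probabilities (heap-based, O(K log K)) and admit every vehicle whose
    probability exceeds c% of that of the top ceil((1 - epsilon) * K)-th ranked vehicle."""
    K = len(probs)
    ranked = heapq.nlargest(K, range(K), key=lambda k: probs[k])
    pivot = max(1, math.ceil((1 - epsilon) * K))
    threshold = (c / 100.0) * probs[ranked[pivot - 1]]      # c% of the pivot's probability
    return [k for k in ranked if probs[k] >= threshold]


# Example with eight vehicles in coverage and joint probabilities from eq. (3).
probs = [0.82, 0.45, 0.91, 0.30, 0.77, 0.66, 0.12, 0.58]
print(select_vehicles(probs, epsilon=0.5, c=90.0))          # -> [2, 0, 4, 5]
```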
Figure 3 The mobility- and data-aware vehicle selection scheme workflow.
To evaluate the performance of our proposed selection algorithm, we conduct a series of simulations with the parameters outlined in Table 1. We employ two common image classification datasets: Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST (referred to as FMNIST). In addition, we apply a multilayer perceptron (MLP) model with two hidden layers, and a convolutional neural network (CNN) model with four layers, consisting of two convolutional layers and two fully connected layers. PyTorch is leveraged as our ML framework.
Table 1 Simulation parameters.
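For reference, the two models can be instantiated roughly as follows in PyTorch. The channel widths and hidden sizes are illustrative guesses, since the exact values belong to Table 1, which is not reproduced here.

```python
import torch
from torch import nn


class SmallCNN(nn.Module):
    """Four-layer CNN for 1x28x28 MNIST/FMNIST inputs: two convolutional and two FC layers."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 128), nn.ReLU(), nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


mlp = nn.Sequential(                                   # MLP with two hidden layers
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10),
)

logits = SmallCNN()(torch.randn(4, 1, 28, 28))         # output shape: (4, 10)
```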
We apply two baselines for comparisons, namely, FedAvg, which selects vehicles at random, and the data-based greedy selection (GS) method, where the vehicles with more data are selected in order of priority.
Figure 4 depicts the loss function values over the MNIST and FMNIST datasets as the training step varies, using the CNN and MLP models in (a) and (b), respectively. Figure 4(a) and (b) indicates that all methods converge as the number of training steps increases. The loss function values of our proposed algorithm and the GS method are lower than those of FedAvg for both datasets and both models. This is because FedAvg allows vehicles with various data sources, hardware conditions, and distances to participate in collaborative learning, which leads to a few selected vehicles making fewer contributions with longer latency, or failing to complete the FL training in time, ultimately wasting the allocated wireless resources. We can also observe that our proposed method achieves a lower loss than the GS method. Although vehicles with higher contributions in terms of data size and computation capacity are prioritized in GS, they may not finish the training in time due to their lower mobility-aware probability, as poor wireless conditions and large data sizes increase the communication overhead. Additionally, vehicles in GS at different distances from the edge server can exit its coverage, interrupting local training and reducing the number of participating vehicles. These comparisons demonstrate that our proposed method achieves the best convergence performance.
Figure 4 Value of the loss function versus the training step number. (a) Training with a CNN model. (b) Training with an MLP model.
Figure 5 shows how the total latency changes as the number of training rounds varies. From this figure, it can be seen that our proposed selection algorithm achieves the lowest latency for both the CNN and MLP models on the MNIST and FMNIST datasets. This is because the edge server in the FedAvg and GS schemes disregards vehicle mobility and dynamic wireless channel conditions. Thus, the selected vehicles may spend a long time uploading local parameters and might be unable to complete the assigned task within the time window, which aggravates the straggler effect, that is, the edge server has to wait for the slowest vehicle in each round. Furthermore, the edge server in FedAvg does not consider the varying computation capacities of the selected vehicles, which increases computation and training latency.
Figure 5 Total latency versus the training step number. (a) Training with a CNN model. (b) Training with an MLP model.
Driven by FL, recent years have witnessed the rapid advancement of VEC. Although numerous works have investigated the enabling technologies presented in the “Network Architecture for FL-Assisted VEC” section, there remain several essential open research directions for exploration.
As mentioned in the “Heterogeneous Resource Allocation” section, there is an emerging trend of investigating the tradeoff among energy consumption, transmission and computation latency, convergence time, and learning accuracy through joint resource allocation. However, most of the existing works concentrate on the joint optimization of computation and communication resources using information from the time and frequency domains. Due to the mobile nature of VEC and its stringent latency requirements, there is an urgent need to investigate multidomain resource optimization across not only the time and frequency domains but also the data and space domains for FL-assisted VEC. Toward this aim, data-driven resource allocation schemes should be developed, where vehicular data features, together with vehicles' heterogeneity and similarity, are explored and exploited for analysis, prediction, and policy decisions. Moreover, the vehicle's location and velocity are also important factors when designing resource allocation to improve FL performance. For example, vehicles in similar locations with high data similarity can be classified into a cluster, so that some of them may contribute fewer computation and communication resources. Thus, a proper design that jointly considers all these domains can help improve learning accuracy at a lower cost of resource use, such as CPU frequency, energy consumption, local learning time, and transmission spectrum.
As one can see, FL performance depends heavily on the involved vehicles and their local data and computing resources. In addition, limited radio resources are a bottleneck for model performance, as the updated parameters are delivered over a wireless medium and aggregated at the edge server. In this context, the design of incentive schemes is critical for promoting FL-assisted VEC, especially when onboard computation and radio resources are scarce and road conditions and safety are highly valued.
To date, aside from schemes for motivating devices to join the FL process, the impact of wireless transmissions, such as interference (barely mentioned in existing works), should also be seriously considered. For example, when designing the incentive scheme, the transmission interference to other users should be treated as a cost, in addition to, e.g., CPU frequency, spectrum, and energy costs. Different frameworks based on game theory, contract theory, and auction theory can be advocated for the design of FL incentives in VEC. Another research direction is to design incentive mechanisms that adapt to different learning algorithms in different VEC scenarios. Apparently, there is a tradeoff between computation and communication costs. Consequently, adapting the features of different learning models and algorithms to the available computation and communication resources, as well as to the number of participating vehicles, will be a challenging research problem to investigate.
FL performance suffers significantly from the straggler effect. As we have shown, the client selection problem is even more severe in VEC scenarios when selecting vehicles to participate in the FL process. It is crucial to optimize the vehicle selection scheme to adapt to such a heterogeneous and mobile environment with strict latency requirements. In VEC, FL often involves road transportation-related tasks.
Therefore, one research topic is to design vehicle selection schemes based on the urgency of the local data and tasks, the location of the vehicles, and the heterogeneity of the computation and communication resources. Moreover, the mobility of vehicles also plays a significant role in the design of the selection algorithm, as seamless connectivity between vehicles and the edge server must be maintained during the training phase. In addition, there is a higher chance in FL-assisted VEC that a client moves out of the edge server's connection area before it completes local training. Therefore, mobility prediction schemes (possibly artificial intelligence based) should be investigated to ensure the connectivity of vehicles during FL training. Moreover, it is also possible to study the data similarity and social relationships of different vehicles so as to improve the efficiency of resource reuse and the selection process.
In addition to the communication bottleneck, FL requires the client to share local training results with the edge computing server for global aggregation. Although FL allows training results to be uploaded without the need for raw data, the current FL process still faces security and privacy attacks through disclosures or leakages at both the data and content levels. When implementing FL-assisted VEC, privacy and security concerns become critical, as the uploaded data from vehicles usually contain local information, velocity, unique IDs, and possibly trajectories, which may lead to further threats to road safety and potential financial loss. Thus, privacy-preserving and protection mechanisms that exploit the computation capacity of the edge server should be developed while balancing the costs of computation and radio resources. Furthermore, lightweight security schemes are urgently required to cope with the stringent latency requirements in VEC. Additionally, to prevent a loss of accuracy, artificial noise can be added to the training and communication phases according to data features. To further enhance security, exchanging learning model updates via blockchain is considered a potential solution, for which it is necessary to investigate novel low-latency consensus algorithms tailored for FL in VEC.
In this article, we reviewed and presented the concept of FL-assisted VEC. Its mobile and distributed nature makes the operation of FL in large-scale VEC face several problems. To address these problems, we presented a general framework for FL-assisted VEC, which includes several enabling technologies to advance the development of FL. Moreover, we proposed a mobility- and data-aware vehicle selection algorithm accordingly. Performance evaluations were conducted to illustrate the benefits of the proposed architecture and algorithm. Recommendations for designing such a system and future research directions were introduced as well.
This work was supported in part by the National Natural Science Foundation of China under Grant 62071105 and Sichuan Natural Science Foundation under Grant 2022NSFSC0544.
Xinran Zhang (zhangxinran@std.uestc.edu.cn) is currently pursuing her Ph.D. degree with the School of Computer Science and Engineering, the University of Electronic Science and Technology of China, Chengdu 611731, China, where she received her B.S. degree in information and communication engineering in 2018. She received her M.S. degree in electrical and computer engineering from The Ohio State University in 2019. Her research interests include federated learning, vehicular network, mobile computing, and machine learning.
Jingyuan Liu (liujingyuan@std.uestc.edu.cn) is currently pursuing her M.S. degree with the School of Computer Science and Engineering, the University of Electronic Science and Technology of China, Chengdu 611731, China, where she received her B.S. degree in 2021. Her research interests include incentive mechanism, mobile edge computing, and federated learning.
Tao Hu (202022081135@std.uestc.edu.cn) is currently pursuing his master’s degree with the School of Computer Science and Engineering, the University of Electronic Science and Technology of China, Chengdu 611731, China. His research interests include machine learning, federated learning, and mobile computing.
Zheng Chang (zheng.chang@jyu.fi) is a professor with the School of Computer Science and Engineering, the University of Electronic Science and Technology of China (UESTC), China, and the Faculty of Information Technology, University of Jyväskylä, FIN-40014, Jyväskylä, Finland. He serves as an editor for IEEE Wireless Communications Letters, Springer Wireless Networks, and International Journal of Distributed Sensor Networks, and as a guest editor for IEEE Network, IEEE Wireless Communications, and IEEE Communications Magazine. He is a Senior Member of IEEE.
Yanru Zhang (yanruzhang@uestc.edu.cn) is a professor with the Shenzhen Institute for Advanced Study and School of Computer Science and Engineering, the University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China. She received her B.S. degree in electronic engineering from UESTC in 2012 and her Ph.D. degree from the Department of Electrical and Computer Engineering, the University of Houston in 2016. She is a Member of IEEE.
Geyong Min (g.min@exeter.ac.uk) is a professor of high-performance computing and networking in the Department of Computer Science, the College of Engineering, Mathematics and Physical Sciences, the University of Exeter, EX4 4QF Exeter, U.K. He received his Ph.D. degree in computing science from the University of Glasgow in 2003. His research interests include computer networks, wireless communications, and ubiquitous computing. He is a Member of IEEE.
[1] Z. Zhou, H. Yu, C. Xu, Z. Chang, S. Mumtaz, and J. Rodriguez, “BEGIN: Big data enabled energy-efficient vehicular edge computing,” IEEE Commun. Mag., vol. 56, no. 12, pp. 82–89, Dec. 2018, doi: 10.1109/MCOM.2018.1700910.
[2] J. Feng, Z. Liu, C. Wu, and Y. Ji, “Mobile edge computing for the internet of vehicles: Offloading framework and job scheduling,” IEEE Veh. Technol. Mag., vol. 14, no. 1, pp. 28–36, Mar. 2019, doi: 10.1109/MVT.2018.2879647.
[3] L. Liu, C. Chen, Q. Pei, S. Maharjan, and Y. Zhang, “Vehicular edge computing and networking: A survey,” Mobile Netw. Appl., vol. 26, no. 3, pp. 1145–1168, Jun. 2021, doi: 10.1007/s11036-020-01624-1.
[4] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proc. Artif. Intell. Statist., Apr. 2017, pp. 1273–1282.
[5] J. Posner, L. Tseng, M. Aloqaily, and Y. Jararweh, “Federated learning in vehicular networks: Opportunities and solutions,” IEEE Netw., vol. 35, no. 2, pp. 152–159, Mar./Apr. 2021, doi: 10.1109/MNET.011.2000430.
[6] D. Ye, R. Yu, M. Pan, and Z. Han, “Federated learning in vehicular edge computing: A selective model aggregation approach,” IEEE Access, vol. 8, pp. 23,920–23,935, 2020, doi: 10.1109/ACCESS.2020.2968399.
[7] S. Liu, J. Yu, X. Deng, and S. Wan, “FedCPF: An efficient-communication federated learning approach for vehicular edge computing in 6G communication networks,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 2, pp. 1616–1629, Feb. 2022, doi: 10.1109/TITS.2021.3099368.
[8] C. Li, Y. Zhang, and Y. Luo, “A federated learning-based edge caching approach for mobile edge computing-enabled intelligent connected vehicles,” IEEE Trans. Intell. Transp. Syst., vol. 24, no. 3, pp. 3360–3369, Mar. 2023, doi: 10.1109/TITS.2022.3224395.
[9] G. Tian, Y. Ren, C. Pan, Z. Zhou, and X. Wang, “Asynchronous federated learning empowered computation offloading in collaborative vehicular networks,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Austin, TX, USA, 2022, pp. 315–320, doi: 10.1109/WCNC51071.2022.9771736.
[10] X. Li, L. Lu, W. Ni, A. Jamalipour, D. Zhang, and H. Du, “Federated multi-agent deep reinforcement learning for resource allocation of vehicle-to-vehicle communications,” IEEE Trans. Veh. Technol., vol. 71, no. 8, pp. 8810–8824, Aug. 2022, doi: 10.1109/TVT.2022.3173057.
[11] T. Zeng, O. Semiariy, M. Chen, W. Saad, and M. Bennis, “Federated learning on the road autonomous controller design for connected and autonomous vehicles,” IEEE Trans. Wireless Commun., vol. 21, no. 12, pp. 10,407–10,423, Dec. 2022, doi: 10.1109/TWC.2022.3183996.
[12] Y. Li, Y. Guo, M. Alazab, S. Chen, C. Shen, and K. Yu, “Joint optimal quantization and aggregation of federated learning scheme in VANETs,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 10, pp. 19,852–19,863, Oct. 2022, doi: 10.1109/TITS.2022.3145823.
[13] S. Abdulrahman et al., “FedMCCS: Multicriteria client selection model for optimal IoT federated learning,” IEEE Internet Things J., vol. 8, no. 6, pp. 4723–4735, Mar. 2021, doi: 10.1109/JIOT.2020.3028742.
[14] T. Liu, B. Di, P. An, and L. Song, “Privacy-preserving incentive mechanism design for federated cloud-edge learning,” IEEE Trans. Netw. Sci. Eng., vol. 8, no. 3, pp. 2588–2600, Jul./Sep. 2021, doi: 10.1109/TNSE.2021.3100096.
[15] S. R. Pandey, N. H. Tran, M. Bennis, Y. K. Tun, A. Manzoor, and C. S. Hong, “A crowdsourcing framework for on-device federated learning,” IEEE Trans. Wireless Commun., vol. 19, no. 5, pp. 3241–3256, May 2020, doi: 10.1109/TWC.2020.2971981.