Researchers from Greece have developed a PV forecasting technique for prosumer schemes using federated learning, a machine learning method in which devices train a shared model locally and send only their model updates to a central server for aggregation. Their simulations compare the approach with localized and centralized forecasting, with some surprising results.
Image: Blue Coat Photos, Flickr, CC BY-SA 2.0
Scientists from Greece’s National Technical University of Athens have proposed a novel PV forecasting technique that protects prosumer privacy. Efficient prosumer schemes rely on accurate solar production forecasting models, which require extensive data, making the trade-off between privacy and utility essential to manage. The researchers’ approach to balancing this trade-off is based on federated learning (FL).
“The FL process starts with a global model shared with all devices. Each device trains the model locally and sends updates to a central server, where they are aggregated to improve the model,” the academics said. “This updated model is then distributed back to the devices for further training. The FL cycle is iterated multiple times until the global model achieves the desired optimal accuracy.”
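The aggregation step can be as simple as a sample-weighted average of client weights, as in the classic FedAvg algorithm. The sketch below is a minimal illustration of one such round; the paper does not spell out its exact aggregation rule, and names such as `client_weights` and `client_sizes` are hypothetical.

```python
# Minimal sketch of one federated averaging (FedAvg) round, assuming
# Keras-style weight lists. Illustrative only, not the paper's exact rule.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weights into a new global model.

    client_weights: list of weight lists, one per client (each a list
                    of numpy arrays, layer by layer).
    client_sizes:   number of local training samples per client, used
                    to weight each client's contribution.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    new_global = []
    for layer in range(n_layers):
        # Weighted average of this layer's parameters across clients.
        layer_avg = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
        new_global.append(layer_avg)
    return new_global

# One FL cycle: the server broadcasts the new global weights back to the
# clients, each client trains locally and returns updated weights, and
# the server aggregates again until accuracy stops improving.
```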
The team’s model runs locally on each machine and includes a long short-term memory (LSTM) architecture, a dropout unit, and two fully connected dense layers. The LSTM handles the sequential data, the dropout unit reduces overfitting, and the dense layers produce the final predictions.
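In Keras terms, such a local model might look like the following sketch. The layer sizes, dropout rate, and 24-step input window are illustrative assumptions, not values from the paper.

```python
# A plausible sketch of the local model described above: an LSTM layer
# for the sequential PV data, a dropout unit against overfitting, and
# two dense layers for the final prediction.
import tensorflow as tf

def build_local_model(window: int = 24, n_features: int = 1) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(window, n_features)),  # sequential PV data
        tf.keras.layers.Dropout(0.2),                  # regularization against overfitting
        tf.keras.layers.Dense(32, activation="relu"),  # first fully connected layer
        tf.keras.layers.Dense(1),                      # next-step generation forecast
    ])
    model.compile(optimizer="adam", loss="mae")
    return model
```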
The approach also relies on hyperparameters, both to tune the local LSTM models and to cluster similar clients on the central server. These hyperparameters, set before training begins, govern the machine learning model’s training process.
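The sketch below illustrates those two roles: a set of fixed training settings, plus a server-side clustering step that groups clients with similar production profiles. The specific values and the use of k-means are assumptions for illustration.

```python
# Hypothetical hyperparameter settings and a server-side clustering step.
# Values and the k-means grouping are assumptions, not the paper's setup.
import numpy as np
from sklearn.cluster import KMeans

HYPERPARAMS = {
    "lstm_units": 64,       # size of the LSTM layer
    "dropout_rate": 0.2,    # dropout unit
    "learning_rate": 1e-3,  # local optimizer step size
    "local_epochs": 5,      # training passes per FL round
}

def cluster_clients(client_profiles: np.ndarray, n_clusters: int = 3):
    """Group clients by a summary of their PV output (e.g. mean daily curve)."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(client_profiles)
```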
Other models
“The dataset under examination is sourced from the electricity grid of Terni, Italy, comprising data from 30 small-scale electricity prosumers who utilize photovoltaic systems for energy generation,” the group explained. “Following normalization, we divide the dataset into two subsets: a training set for model training and a testing set for evaluating the model’s performance on unseen data. This division adheres to an 80-20 split, with data from January 2015 to December 2017 designated for training and data spanning from January 2018 to December 2019 allocated for testing.”
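A minimal pandas sketch of that preprocessing could look like the following; the file name, the column names `timestamp` and `generation`, and the choice of min-max normalization are assumptions.

```python
# Normalization and chronological split, as described by the authors.
import pandas as pd

df = pd.read_csv("terni_prosumers.csv", parse_dates=["timestamp"])  # hypothetical file

# Min-max normalization of the generation signal.
g = df["generation"]
df["generation"] = (g - g.min()) / (g.max() - g.min())

# Chronological split: 2015-2017 for training, 2018-2019 for testing.
train = df[df["timestamp"] < "2018-01-01"]
test = df[(df["timestamp"] >= "2018-01-01") & (df["timestamp"] < "2020-01-01")]
```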
The researchers then compared the FL-LSTM model against several alternative learning methods on the same dataset. The first was localized learning, in which each prosumer trains only on its own data in a fully private environment. The second was centralized learning, which typically offers higher accuracy but sacrifices privacy. The third was FL enhanced with differential privacy (DP) to minimize the chance of identifying individual contributions, using noise multipliers of 0.2, 0.25, 0.3, and 0.4.
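In DP training, the noise multiplier scales Gaussian noise added to clipped model updates. The sketch below shows the general mechanism; the clipping bound and the exact point where noise is injected are assumptions, as the paper’s precise DP setup is not detailed here.

```python
# Rough sketch of how a Gaussian noise multiplier enters DP federated
# learning: clip each client update to a norm bound, then add noise
# scaled by multiplier * bound before aggregation. Illustrative only.
import numpy as np

def privatize_update(update: np.ndarray, noise_multiplier: float = 0.2,
                     clip_norm: float = 1.0, seed=None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Clip the update so no single client dominates the aggregate.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Add Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```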
“To assess the performance of the models, two key metrics are utilized: mean absolute error (MAE) and root mean square error (RMSE),” the group explained. “The selection of MAE allows for a comprehensive overview of the error margins of our models, particularly due to its robustness against outliers – a notable characteristic of our dataset. Conversely, RMSE emphasizes sensitivity to larger errors, which is crucial for evaluating the accuracy of generation forecasting, as it highlights the impact of substantial deviations more than MAE.”
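For reference, the two metrics written out in code, with `y_true` and `y_pred` as arrays of actual and forecast generation values:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: robust to outliers.
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large deviations more heavily.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```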
The results showed that the centralized model performed best, with an MAE of 0.00960 and RMSE of 0.01687. The FL model had an MAE of 0.01993 and RMSE of 0.02872. The FL-DP model with a noise multiplier of 0.2 recorded an MAE of 0.01857 and RMSE of 0.02669. The localized model had an MAE of 0.02436 and RMSE of 0.04679, while the FL-DP model with a noise multiplier of 0.25 showed an MAE of 0.02651 and RMSE of 0.03375. Results for noise multipliers of 0.3 and 0.4 were not provided.
“In the search for a noise level that would provide similar performance to the non-DP FL implementation, we encountered an intriguing anomaly. The optimal noise-to-performance ratio was observed at a noise multiplier of 0.2, which unexpectedly yielded better results than FL,” the group noted. “Our experiments with noise multipliers higher than 0.2 demonstrated the anticipated degradation in predictive accuracy, with the 0.4 multiplier making the model unable to converge.”
The group said that the “main constraint involved the limited size of the dataset concerning the number of participating clients. This study serves as a baseline; adding more prosumers over time would certainly increase the performance of FL and FL-DP. With that in mind, our results indicate that for smaller datasets with few participating clients, centralized learning outperforms FL in terms of accuracy, even though both approaches leverage the collective data available. Despite this, FL offers benefits regarding privacy and communication costs.”
They presented their results in “Empowering federated learning techniques for privacy-preserving PV forecasting,” which was recently published in Energy Reports.