• Journal indexed in the CSCD Core Collection
  • Chinese Core Journal
  • China Science and Technology Core Journal

Electric Power Construction ›› 2020, Vol. 41 ›› Issue (3): 71-78. doi: 10.3969/j.issn.1000-7229.2020.03.009

• Source-Grid-Load-Storage Interactive Operation of Active Distribution Systems

  • About the authors: SHI Jingjian (1972), male, master's degree, senior engineer; main research interests: grid operation and maintenance, marketing services, and energy storage. ZHOU Wentao (1979), male, master's degree, senior engineer; main research interests: grid operation, overhaul, and maintenance. ZHANG Ning (1984), male, master's degree, senior engineer; main research interests: distribution network operation, maintenance, and overhaul, and distributed energy storage. CHEN Qiao (1974), male, senior engineer, mainly engaged in integrated energy services. LIU Jintao (1981), male, senior engineer and senior economist, mainly engaged in main-grid and distribution network planning and enterprise management. CAO Zhenbo (1984), male, senior engineer, mainly engaged in distribution network project management and power system stability analysis. CHEN Yi (1985), female, engineer, mainly engaged in innovation management, indicator management, and corporate business analysis. SONG Hang (1995), male, master's student; research interest: big data techniques in power systems. LIU Youbo (1983), male, Ph.D., associate professor; research interests: planning and operation of active distribution networks, and risk assessment and defense of cascading failures in complex power systems.

Deep Reinforcement Learning Algorithm of Voltage Regulation in Distribution Network with Energy Storage System

SHI Jingjian1, ZHOU Wentao1, ZHANG Ning1, CHEN Qiao1, LIU Jintao1, CAO Zhenbo1, CHEN Yi1, SONG Hang2, LIU Youbo2

  1. Chaoyang Power Supply Company, State Grid Beijing Electric Power Co., Ltd., Beijing 100020, China; 2. College of Electrical Engineering, Sichuan University, Chengdu 610065, China
  • Online: 2020-03-01
  • Supported by:
    This work is supported by the Science and Technology Project of State Grid Corporation of China (No. 52020318003X).


Abstract: An energy storage system connected at the end of a distribution network and used for ancillary services such as voltage regulation can effectively mitigate the voltage problems caused by highly intermittent distributed renewable generation and fluctuating load demand. In this paper, the operation of the battery energy storage system is modeled as a Markov decision process that accounts for its subsequent regulation capability, and a deep reinforcement learning (DRL) algorithm for voltage regulation in distribution networks with energy storage is proposed. A deep Q-network is embedded to approximate the optimal action value, which overcomes the problem of an excessively large state space. A state feature vector composed of the storage state of charge (SOC), the predicted renewable output, and the load level is used as the input of the Q-network, which outputs the optimal discretized charge/discharge action; the network is trained with an experience replay strategy. Compared with traditional methods, the proposed scheme is learning-based, requires no explicit uncertainty model, and is computationally efficient. Finally, the IEEE 33-bus distribution system is analyzed with MATPOWER under the TensorFlow framework, demonstrating the effectiveness of the proposed method.
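The control loop summarized in the abstract (a state feature vector of SOC, renewable forecast, and load level; a Q-network that outputs a discrete charge/discharge action; training by experience replay) can be sketched as below. This is an illustrative toy only, not the authors' implementation: the one-step environment, the voltage-deviation reward proxy, the tiny NumPy network standing in for the deep Q-network, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete action set: SOC change per step
# (index 0 = discharge, 1 = idle, 2 = charge).
ACTIONS = np.array([-0.1, 0.0, 0.1])

def step(state, a_idx):
    """Toy one-step surrogate for the feeder: charging absorbs renewable
    surplus, so the reward penalizes the remaining net-injection mismatch
    (a crude stand-in for voltage deviation)."""
    soc, res, load = state
    soc = float(np.clip(soc + ACTIONS[a_idx], 0.0, 1.0))
    net = res - load - ACTIONS[a_idx]
    reward = -abs(net)
    next_state = np.array([soc, rng.random(), rng.random()])
    return next_state, reward

# Tiny one-hidden-layer Q-network (stand-in for the paper's deep Q-network).
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)

def q_values(s):
    h = np.maximum(0.0, s @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2, h

replay = []                            # experience replay buffer
state = np.array([0.5, 0.5, 0.5])     # (SOC, renewable forecast, load)
gamma, lr, eps = 0.9, 1e-2, 0.2

for t in range(3000):
    # Epsilon-greedy action from the current Q estimates.
    q, _ = q_values(state)
    a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q))
    next_state, r = step(state, a)
    replay.append((state, a, r, next_state))
    state = next_state

    # Replay training: sample one stored transition and take one SGD step
    # on the squared TD error toward the bootstrapped target.
    s, ai, ri, s2 = replay[rng.integers(len(replay))]
    q_s, h = q_values(s)
    target = ri + gamma * float(np.max(q_values(s2)[0]))
    err = q_s[ai] - target
    g_out = np.zeros(3); g_out[ai] = err
    g_h = (g_out @ W2.T) * (h > 0)
    W2 -= lr * np.outer(h, g_out); b2 -= lr * g_out
    W1 -= lr * np.outer(s, g_h);   b1 -= lr * g_h
```

In the paper, the Q-network is trained under TensorFlow and rewards come from MATPOWER power-flow solutions on the IEEE 33-bus system; the surrogate above only shows the shape of the replay-trained DQN loop.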

Key words: distribution network, battery energy storage, deep reinforcement learning (DRL), voltage operation level

CLC number: