An online optimization framework with policy-assisted graph reinforcement learning (PAGRL) is proposed for real-time economic dispatch (RTED). In this framework, RTED is cast as a sequential decision problem and formulated as a Markov decision process (MDP). PAGRL employs a graph convolutional network to extract grid operation features that capture topological information, and an agent that performs power dispatch is then trained through proximal policy optimization. Moreover, the adaptiveness of the agent to hard-to-learn scenarios is enhanced by difficulty sampling, and a policy-assisted action post-processing mechanism is designed to reduce the search space and improve decision quality, providing a general performance enhancement scheme for reinforcement learning in power system applications. Comparative studies on a modified IEEE 118-bus system and a real-world provincial grid demonstrate the flexible and reliable performance of the proposed approach for RTED.
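The abstract does not specify how the action post-processing operates; as a rough illustration only, the sketch below shows one common form such a step could take for dispatch actions: clipping the raw agent output to generator limits and then proportionally repairing the power-balance mismatch within the remaining headroom. The function name, variable names, and the proportional repair rule are all assumptions for illustration, not the paper's actual mechanism.

```python
import numpy as np

def postprocess_action(raw_action, p_min, p_max, total_demand):
    """Hypothetical post-processing of a raw dispatch action (illustrative sketch).

    raw_action: agent's proposed generator outputs (MW)
    p_min, p_max: generator lower/upper limits (MW)
    total_demand: system load to be balanced (MW), losses ignored here
    """
    # Enforce generation limits on the raw action.
    p = np.clip(raw_action, p_min, p_max)

    # Remaining power-balance mismatch after clipping.
    mismatch = total_demand - p.sum()

    # Headroom in the direction needed to close the mismatch.
    headroom = (p_max - p) if mismatch >= 0 else (p - p_min)
    total_headroom = headroom.sum()

    # Distribute the mismatch proportionally to available headroom.
    if total_headroom > 0:
        p = p + mismatch * headroom / total_headroom

    # Final safeguard against numerical drift outside the limits.
    return np.clip(p, p_min, p_max)

# Example usage with assumed, illustrative data (3 generators).
p_min = np.array([50.0, 30.0, 20.0])
p_max = np.array([200.0, 150.0, 100.0])
raw = np.array([180.0, 10.0, 90.0])   # violates the second generator's lower limit
print(postprocess_action(raw, p_min, p_max, total_demand=320.0))
```

A projection step of this kind narrows the agent's effective search space to (near-)feasible dispatches, which is consistent with the abstract's stated goal of improving decision quality.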