dc.contributor.author |
Hareem, Nisha |
|
dc.date.accessioned |
2020-11-02T07:24:09Z |
|
dc.date.available |
2020-11-02T07:24:09Z |
|
dc.date.issued |
2020 |
|
dc.identifier.other |
203888 |
|
dc.identifier.uri |
http://10.250.8.41:8080/xmlui/handle/123456789/8224 |
|
dc.description |
Supervisor: Dr. Kashif Imran |
en_US |
dc.description.abstract |
In electricity markets, participants undertake distributed decision making in a dynamic environment. Agent-based modeling and simulation is well suited to analyzing such distributed decision making. Self-interested GENCO agents can learn from the results of the day-ahead auction and adjust their bids for the next day. Reactive reinforcement learning algorithms are able to learn the optimal response of GENCO agents to the dynamic conditions of the day-ahead auction. This thesis explores changes in the convergence behavior of the learning algorithm when the market environment is made more dynamic by introducing stochastic load profiles and variable generation cost coefficients, which model load variability and fuel price changes in real-world markets. The results show that as variation in the load profile increases, the algorithm generally takes more days to converge, but the average number of simulation runs that converge remains within a small range. As fuel price variation increases, convergence becomes harder and is delayed, yet a greater number of simulation runs achieve convergence. |
en_US |
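The record does not specify which reactive reinforcement learning algorithm the thesis uses, nor its parameters. Purely as an illustration of the kind of agent the abstract describes, the sketch below implements a minimal Roth-Erev reactive learner (a common choice in agent-based electricity market studies) that picks a bid markup each day and reinforces it with the resulting profit under a stochastic clearing price; all class names, markup values, and market parameters here are assumptions, not the thesis's actual model.

```python
import random


class RothErevAgent:
    """Illustrative Roth-Erev reactive learner over discrete bid markups.

    Propensities start equal; each day the chosen markup's propensity is
    reinforced by the realized profit, with recency (forgetting) and
    experimentation (spillover to other actions). Parameter values are
    placeholders, not those of the thesis.
    """

    def __init__(self, markups, recency=0.1, experimentation=0.2, seed=0):
        self.markups = markups
        self.recency = recency                  # decay of old propensities
        self.experimentation = experimentation  # spillover to unchosen actions
        self.propensity = [1.0] * len(markups)
        self.rng = random.Random(seed)

    def choose(self):
        """Sample an action index with probability proportional to propensity."""
        total = sum(self.propensity)
        r, acc = self.rng.random(), 0.0
        for i, p in enumerate(self.propensity):
            acc += p / total
            if r <= acc:
                return i
        return len(self.propensity) - 1

    def update(self, chosen, reward):
        """Roth-Erev update: decay all propensities, then spread the reward."""
        n = len(self.markups)
        for i in range(n):
            self.propensity[i] *= (1.0 - self.recency)
            if i == chosen:
                self.propensity[i] += reward * (1.0 - self.experimentation)
            else:
                self.propensity[i] += reward * self.experimentation / (n - 1)


def simulate(days=200, cost=20.0, seed=1):
    """Run one GENCO agent against a toy stochastic day-ahead market.

    The clearing price is drawn from a noisy daily distribution to stand in
    for the stochastic load profile; the agent earns (price - cost) only if
    its bid clears. Numbers are arbitrary for illustration.
    """
    rng = random.Random(seed)
    agent = RothErevAgent([0.0, 0.1, 0.2, 0.4], seed=seed)
    for _ in range(days):
        i = agent.choose()
        bid = cost * (1.0 + agent.markups[i])
        price = 25.0 + rng.gauss(0.0, 2.0)  # stochastic clearing price
        profit = (price - cost) if bid <= price else 0.0
        agent.update(i, max(profit, 0.0))
    return agent
```

Under this kind of learner, widening the spread of the daily price (standing in for load or fuel price variation) slows the concentration of propensity on a single markup, which is one intuition for the delayed convergence the abstract reports. |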
dc.language.iso |
en_US |
en_US |
dc.publisher |
U.S.-Pakistan Center for Advanced Studies in Energy (USPCAS-E), NUST |
en_US |
dc.relation.ispartofseries |
TH-217 |
|
dc.subject |
reinforcement learning |
en_US |
dc.subject |
electricity market |
en_US |
dc.subject |
convergence rate |
en_US |
dc.subject |
strategic bidding |
en_US |
dc.subject |
MS-EEP |
en_US |
dc.title |
Effect of load and fuel price variation on GENCO agent in day-ahead auction |
en_US |
dc.type |
Thesis |
en_US |