
Documentation for Different Environments #44

Open
jkterry1 opened this issue Sep 7, 2021 · 12 comments
@jkterry1
Contributor

jkterry1 commented Sep 7, 2021

Hey,

I was planning to explore using a handful of these environments as part of my research. However, unless I'm missing something, there's no explanation or visuals of the mechanics or behaviors of the different environments/maps. Is that the case, and if so, would you be willing to take an hour to add it to the readme or something? It'd be super helpful for those potentially interested in your environments.

@LucasAlegre
Owner

Hi,

I'm glad that you are interested in using sumo-rl!
Sure, I could definitely do that. Do you have anything specific in mind? Maybe describe the default definition of states and rewards?
Notice that SumoEnvironment is generic and can be instantiated with any .net and .rou SUMO files. Also, you can visualize the networks directly on SUMO.
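For example, pointing the environment at your own files looks roughly like this (a minimal sketch; the paths are placeholders, and you should check the readme for the full argument list):

```python
from sumo_rl import SumoEnvironment

# Placeholder paths: point these at your own SUMO network/route files.
env = SumoEnvironment(
    net_file='your-network.net.xml',
    route_file='your-routes.rou.xml',
    out_csv_name='outputs/your-run',  # metrics are saved as CSV
    use_gui=False,                    # True opens sumo-gui
    num_seconds=20000,                # length of the simulation
    single_agent=True,                # gym-style API for one traffic signal
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random actions, just to exercise the env
    obs, reward, done, info = env.step(action)  # classic gym 4-tuple
env.close()
```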

@jkterry1
Contributor Author

jkterry1 commented Sep 7, 2021

"Maybe describe the default definition of states and rewards?"
That, plus the action and observation spaces, and images of what each looks like, would work, ya :)

@LucasAlegre
Owner

I just updated the readme with the basic definitions, but I plan to add more details later!

@jkterry1
Contributor Author

Hey, I just sat down and looked at this. I'm fairly experienced in RL (I wanted to use these environments as part of a set of many to test a general MARL algorithm I've been working on), but I'm not very experienced with traffic control/SUMO, so I have a few questions after reading:

  • What does phase_one_hot mean?
  • What does lane_1_queue mean?
  • What does "green phase" mean?
  • Could you please document the action space too?
  • Could you elaborate a bit on why that specific reward function makes sense as the default? Is that the standard in the literature?
  • Also, your new links to TrafficSignal are dead.

@LucasAlegre
Owner

Hey, I believe I have answered these questions in commit f0b387f. (I also fixed the dead links.)

Regarding the reward function, there isn't really a standard in the literature.
In my experience, the change in delay/waiting time is what worked best. I can point you to some papers that use this reward:

  • Genders, W. and Razavi, S. "Evaluating Reinforcement Learning State Representations for Adaptive Traffic Signal Control." Procedia Computer Science 130 (2018): 26-33.
  • Alegre, L. N., Bazzan, A. L. C., and da Silva, B. C. "Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control." PeerJ Computer Science, 2021.
  • Alegre, L. N., Ziemke, T., and Bazzan, A. L. C. "Using Reinforcement Learning to Control Traffic Signals in a Real-World Scenario: An Approach Based on Linear Function Approximation." IEEE Transactions on Intelligent Transportation Systems, 2021. doi: 10.1109/TITS.2021.3091014.

I have also seen many papers using pressure as the reward (but I didn't get better results with it):

  • Wei, H., Chen, C., Zheng, G., Wu, K., Gayah, V., Xu, K., and Li, Z. "PressLight: Learning Max Pressure Control to Coordinate Traffic Signals in Arterial Network." In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19), 1290-1298. doi: 10.1145/3292500.3330949.
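To make the two ideas concrete, here's a rough sketch of both reward computations (the signatures are illustrative, not sumo-rl's actual internals; the real code lives in the TrafficSignal class):

```python
def diff_waiting_time_reward(prev_total_wait: float, total_wait: float) -> float:
    """Default idea: reward the decrease in the accumulated waiting time of
    vehicles at the intersection since the last action."""
    return prev_total_wait - total_wait


def pressure_reward(num_incoming: int, num_outgoing: int) -> float:
    """Pressure idea (as in PressLight): reward the negative difference between
    the number of vehicles on incoming and outgoing lanes."""
    return -(num_incoming - num_outgoing)
```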

@jkterry1
Contributor Author

Hey thanks a ton for that!

A few more questions:

  • You have the sentence "Obs: Every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds." Either that's in the wrong section or I'm very confused.
  • I'm sure this is simply due to my unfamiliarity, but what's a "green phase"?
  • Would you also be willing to clarify in the readme what the different built-in nets are like? That'd also be super helpful.

@LucasAlegre
Owner

LucasAlegre commented Sep 13, 2021

  • You have the sentence "Obs: Every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds." Either that's in the wrong section or I'm very confused.

Oops, this "Obs:" means "Ps:" :P It means that when your action changes the phase, the env sets a yellow phase before actually setting the phase selected by the agent's action.
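Roughly, the transition logic looks like this (an illustrative sketch, not the actual TrafficSignal code):

```python
def phase_transition(current_green: int, selected_green: int, yellow_time: int) -> list:
    """Return the sequence of phases the env will actually set after an action.
    Illustrative helper, not part of sumo-rl's API."""
    if selected_green == current_green:
        return [(current_green, None)]  # phase kept: no yellow inserted
    # Otherwise a yellow phase runs for yellow_time seconds first.
    return [('yellow', yellow_time), (selected_green, None)]
```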

  • I'm sure this is simply due to my unfamiliarity, but what's a "green phase"?

The nomenclature for traffic signal control can be a bit confusing. By "green phase" I mean a phase configuration with green (permissive) movements. The 4 actions in the readme are examples of 4 green phases.
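Concretely, SUMO encodes a phase as a string with one character per controlled connection ('G'/'g' = green, 'y' = yellow, 'r' = red). For a hypothetical 4-way intersection with 8 connections, the 4 green phases could look like this (made-up state strings, just for illustration):

```python
# One character per controlled connection: G/g = green, y = yellow, r = red.
GREEN_PHASES = [
    'GGrrrrrr',  # action 0: North-South through movements green
    'rrGGrrrr',  # action 1: North-South left turns green
    'rrrrGGrr',  # action 2: East-West through movements green
    'rrrrrrGG',  # action 3: East-West left turns green
]
# The yellow transition out of phase 0 would then be 'yyrrrrrr'.
```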

  • Would you also be willing to clarify in the readme what the different built-in nets are like? That'd also be super helpful.

Sure! I also intend to add more networks to the repository.

@ahphan

ahphan commented Sep 20, 2021

Hello,

I am really new to SUMO, but is there a way to deploy a trained agent with sumo-gui?

I was able to run the example experiments/ql_2way-single-intersection.py and plot the results. Then I tried "python experiments/ql_2way-single-intersection.py -gui", which provided visualizations in sumo-gui, but the terminal window wasn't updating the step number (it usually increments to 100,000), so I'm not sure whether it is actually training and visualizing at the same time.

In summary, I would like to know if I can save the trained agent, deploy it in an environment, and visualize it in sumo-gui. Also, when I use the "-gui" argument, is the agent still training as it normally would if I ran "python experiments/ql_2way-single-intersection.py", just without the step number updating?

I really appreciate your contributions, thank you!

@LucasAlegre
Owner


Hi,

Using -gui only activates the SUMO GUI; it has no effect on the training procedure.
Note that training is part of the algorithm (not the environment), so you can use any algorithm you want, save the model, and then run it again with sumo-gui to visualize it.
In the ql example I did not implement a method to save the agent's q-tables, but that should be easy to do; a sketch is below.
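For example, something like this would work (a sketch assuming the agent stores its Q-values in a picklable attribute, e.g. a dict named q_table; check the QLAgent code for the actual attribute name):

```python
import pickle

def save_q_table(agent, path: str) -> None:
    """Persist the agent's Q-values after training."""
    with open(path, 'wb') as f:
        pickle.dump(agent.q_table, f)

def load_q_table(agent, path: str) -> None:
    """Restore previously saved Q-values before an evaluation run."""
    with open(path, 'rb') as f:
        agent.q_table = pickle.load(f)
```

Then recreate the environment with use_gui=True, reload the table, and act greedily (no exploration) to watch the trained agent.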

@LucasAlegre
Owner

@jkterry1 I just added network and route files from RESCO (check the readme). Basically, RESCO is a set of benchmarks for traffic signal control that was built on top of SUMO-RL. In their paper you can find results for different algorithms.
Later this week I'll try to add more documentation and examples for these networks.

@jkterry1
Contributor Author

jkterry1 commented Oct 5, 2021

Hey, it's been a week so I'm just following up on this :)

@LucasAlegre
Owner

LucasAlegre commented Oct 6, 2021


Hey, I have just added an API to instantiate a few of these environments in the file https://github.com/LucasAlegre/sumo-rl/blob/master/sumo_rl/environment/resco_envs.py!
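Usage would look something like this (grid4x4 as an example; check resco_envs.py for the exact constructor names and signatures):

```python
# Hypothetical usage sketch; see resco_envs.py for the actual API.
from sumo_rl.environment.resco_envs import grid4x4

env = grid4x4()             # builds the RESCO 4x4 grid scenario
observations = env.reset()
```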
