# FreeHoopRL

Welcome to the FreeHoopRL repository! This project is a 2D basketball simulation built around the DQN (Deep Q-Network) algorithm. It serves as a platform for understanding reinforcement learning principles in the context of basketball shooting mechanics. The simulation omits air resistance, which keeps the physics simple and makes it well suited for educational use.
## Table of Contents

- [Project Overview](#project-overview)
- [Installation](#installation)
- [Usage](#usage)
- [Algorithm Explanation](#algorithm-explanation)
- [Topics Covered](#topics-covered)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
## Project Overview

FreeHoopRL is designed to simulate a basketball shooting scenario using the DQN algorithm. This project provides an interactive environment where users can experiment with different strategies and observe how the DQN agent learns to shoot hoops over time.
The simulation simplifies the physics involved by excluding air resistance, which allows for a more straightforward understanding of how reinforcement learning can be applied in game-like scenarios.
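Concretely, removing drag reduces every shot to textbook projectile motion, so the ball's flight has a closed-form solution. The following minimal sketch (illustrative only; the function name and units are assumptions, not taken from the repository) shows the kind of trajectory computation such a simulation can rely on:

```python
import math
from typing import Tuple

G = 9.81  # gravitational acceleration in m/s^2

def ball_position(v0: float, angle_deg: float, t: float) -> Tuple[float, float]:
    """Ball position t seconds after release, ignoring air resistance.

    Without drag the trajectory is plain projectile motion:
        x(t) = v0 * cos(theta) * t
        y(t) = v0 * sin(theta) * t - 0.5 * g * t^2
    """
    theta = math.radians(angle_deg)
    x = v0 * math.cos(theta) * t
    y = v0 * math.sin(theta) * t - 0.5 * G * t ** 2
    return x, y

# Example: a shot released at 8 m/s and 55 degrees, sampled 0.5 s into flight
print(ball_position(8.0, 55.0, 0.5))
```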
## Installation

To get started with FreeHoopRL, follow these steps:
- **Clone the Repository**

  ```bash
  git clone https://github.com/yassinhawari/FreeHoopRL.git
  cd FreeHoopRL
  ```

- **Install Dependencies**

  Make sure you have Python 3.6 or higher installed, then install the required packages:

  ```bash
  pip install -r requirements.txt
  ```

- **Download Releases**

  You can find the latest release on the repository's Releases page. Download the appropriate file and execute it to run the simulation.
## Usage

After installation, start the simulation by running:

```bash
python main.py
```

This launches the 2D basketball simulation, where you can watch the DQN agent attempt to make successful shots. The simulation provides visual feedback on the agent's performance and allows real-time adjustment of parameters.
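The exact parameter interface is defined in the repository's source. As a purely hypothetical illustration of the kind of DQN hyperparameters you might adjust between runs (all names and values below are placeholders, not the project's actual API):

```python
# Hypothetical hyperparameters for illustration only -- consult the
# repository's source for the real parameter names and defaults.
config = {
    "learning_rate": 1e-3,         # step size for the optimizer
    "gamma": 0.99,                 # discount factor for future rewards
    "epsilon_start": 1.0,          # initial exploration rate
    "epsilon_min": 0.05,           # floor on exploration
    "epsilon_decay": 0.995,        # multiplicative decay per episode
    "replay_buffer_size": 10_000,  # experiences kept for replay
    "batch_size": 64,              # minibatch size per training step
    "target_update_every": 500,    # steps between target-network syncs
}
```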
## Algorithm Explanation

Deep Q-Network (DQN) is a reinforcement learning algorithm that combines Q-learning with deep neural networks, enabling agents to learn optimal actions in environments with high-dimensional state spaces.
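At its core, the network $Q_\theta$ is trained to regress its Q-value toward the one-step Bellman target, where $\gamma$ is the discount factor and $Q_{\theta^-}$ is the target network described below:

$$
y = r + \gamma \max_{a'} Q_{\theta^-}(s', a'), \qquad \mathcal{L}(\theta) = \bigl(Q_\theta(s, a) - y\bigr)^2
$$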
- State Representation: The state of the environment is represented as an input to the neural network.
- Action Selection: The agent selects actions based on the Q-values predicted by the neural network.
- Reward System: The agent receives rewards based on the outcomes of its actions, allowing it to learn over time.
- Experience Replay: DQN uses a replay buffer to store experiences, which helps stabilize training.
- Neural Network: The core of the DQN, which approximates the Q-values.
- Target Network: A separate network that stabilizes learning by providing consistent Q-value targets.
- Epsilon-Greedy Strategy: Balances exploration and exploitation, letting the agent discover new strategies while continuing to refine known ones (see the sketch after this list).
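To make these pieces concrete, here is a minimal, self-contained sketch of a DQN update in PyTorch. It is not the repository's implementation; the state/action dimensions, network shape, and hyperparameters are placeholder assumptions chosen for illustration:

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 3  # placeholders, e.g. shot geometry -> angle/power choices
GAMMA, EPSILON = 0.99, 0.1   # discount factor and exploration rate

def make_net() -> nn.Module:
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

q_net = make_net()
target_net = make_net()
target_net.load_state_dict(q_net.state_dict())  # start the networks in sync
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
# Experiences are stored as (state, action, reward, next_state, done) tensors
replay_buffer = deque(maxlen=10_000)

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def train_step(batch_size: int = 64) -> None:
    """One gradient step on a random minibatch drawn from the replay buffer."""
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = map(torch.stack, zip(*batch))
    # Q(s, a) for the actions that were actually taken
    q_values = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    # Bellman target computed with the frozen target network for stability
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically (every few hundred steps) copy q_net's weights into
    # target_net with target_net.load_state_dict(q_net.state_dict()).
```

Sampling minibatches at random from the replay buffer breaks the correlation between consecutive frames, which is what stabilizes training compared to learning from experiences in order.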
## Topics Covered

This project touches on various topics related to reinforcement learning and machine learning algorithms. Here are some key areas:
- DQN: Understanding the core principles of Deep Q-Networks.
- DQN Agents: Exploring different types of agents that can be implemented.
- Machine Learning: A broader look at how machine learning principles apply to this simulation.
- Reinforcement Learning: Insights into how agents learn from their environment.
## Contributing

We welcome contributions to improve FreeHoopRL. If you have ideas for enhancements or bug fixes, please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -m 'Add new feature'`).
- Push to your branch (`git push origin feature-branch`).
- Create a pull request.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Contact

For questions or feedback, please reach out via GitHub issues or contact the repository owner directly.
Feel free to explore the repository, experiment with the code, and dive deeper into the fascinating world of reinforcement learning! For more updates and releases, check the Releases section.