A console application that reduces bugs, improves performance, and improves the readability of your code.
(Demo video: refactorcode.mp4)
- Detects and corrects bugs (out-of-bounds access, performance issues, logical bugs).
- Removes commented-out and unreachable code.
- Adds comments to explain existing code.
- Splits very large functions into smaller functions for better modularity.
Run the tool on a file:

```bash
refactorcode ./yourfile
```

The refactored code is displayed in the console. To specify an output file, use `-o` (see Options for more details), as in the sketch below.
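For example, a hypothetical run that refactors a local script and writes the result to a new file might look like this (both file names are placeholders):

```bash
# Hypothetical example: refactor script.py and write the result to refactored.py
refactorcode ./script.py -o refactored.py
```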
Ensure you have Node.js and npm installed on your computer: https://nodejs.org/
Install the package from npm, either for the project or globally:

```bash
npm install refactorcode
```

OR

```bash
npm install -g refactorcode
```

Get an API key from Google AI Studio: https://ai.google.dev/aistudio

To configure the application, there are two options: create a `.env` file or a `.toml` file.
Option 1: Create a `.env` file in your project root directory and add the API key like this:

```
API_KEY=YOURAPIKEYHERE
```

Option 2: Create a TOML file named `.refactorcode.toml` in your home directory and add your API key and/or preferences:
- Create the TOML file:

  Open your terminal and run the following command to create a new TOML file in your home directory:

  ```bash
  touch ~/.refactorcode.toml
  ```

- Copy the sample configuration:

  Copy the sample configuration from `.refactorcode.toml.example` into your newly created `.refactorcode.toml` file:

  ```bash
  cp .refactorcode.toml.example ~/.refactorcode.toml
  ```

- Edit the configuration:

  Open the `.refactorcode.toml` file in your preferred text editor and add your API key and any other preferences (e.g. `MODEL`) you need. A minimal sketch follows after this list.
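Here is a minimal sketch of what `.refactorcode.toml` might contain. The key names are assumptions: `API_KEY` mirrors the `.env` variable above, and `MODEL` is the preference mentioned in the last step, using one of the model shorthands listed under Options.

```toml
# Hypothetical sketch of ~/.refactorcode.toml
# Key names are assumed to mirror the .env variable (API_KEY) and the MODEL preference mentioned above.
API_KEY = "YOURAPIKEYHERE"

# Optional: model shorthand as listed under Options (1.5f or 1.5p); the value format is an assumption.
MODEL = "1.5f"
```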
If you want to contribute to the project or make a custom version of the library, here are the instructions:
```bash
git clone https://github.com/brokoli777/RefactorCode.git
```

```bash
pnpm install
```

OR

```bash
npm install
```

```bash
npm link
```

Then try the CLI on the bundled example:

```bash
refactorcode examples/test.py
```

Options:

`-m` or `--model`: Specifies the model to use.
Choices:
- 1.5f (gemini-1.5-flash) (default)
- 1.5p (gemini-1.5-pro)
```bash
refactorcode examples/test.py -m 1.5p
```

`-o` or `--output`: Sets the output file.
```bash
refactorcode examples/test.py -o hello.py
```

`-t` or `--token-usage`: Shows information on the tokens used.
`-s` or `--stream`: Streams the response as it is received.
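As a quick illustration, here is a hypothetical invocation that combines the options above; it assumes the flags can be used together (this is not stated explicitly) and reuses the `examples/test.py` file from the earlier examples.

```bash
# Hypothetical combined run (assumes the flags can be combined):
# use gemini-1.5-pro, write the refactored code to hello.py, and report token usage.
refactorcode examples/test.py -m 1.5p -o hello.py -t
```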