The AI may hallucinate or miss security issues it previously identified. If we could feed previous results into the code analysis, we could determine which problems were already solved and give the LLM stronger hints about which issues might still be open.
Tech details:
- Create a toggle/switch to enable historical analysis
- When loading the review, look for all issues in the results folder for a given file
- Use those issues as part of the context for the LLM
- Adjust the context size accordingly
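The steps above could be sketched roughly as follows. The results-folder layout, the JSON shape of past reviews, and the character budget are all assumptions for illustration, not the actual format:

```python
import json
from pathlib import Path

MAX_CONTEXT_CHARS = 8000  # assumed budget; tune to the model's context window


def load_previous_issues(results_dir: str, source_file: str) -> list[dict]:
    """Collect previously reported issues for a given source file.

    Assumes each past review is stored as a JSON file in the results
    folder, with an "issues" list whose entries carry a "file" field.
    """
    issues = []
    for result in sorted(Path(results_dir).glob("*.json")):
        data = json.loads(result.read_text())
        issues.extend(i for i in data.get("issues", []) if i.get("file") == source_file)
    return issues


def build_history_context(issues: list[dict], budget: int = MAX_CONTEXT_CHARS) -> str:
    """Format past issues as a prompt section, trimmed to the budget."""
    lines = [
        f"- [{i.get('severity', '?')}] {i.get('title', '')} ({i.get('status', 'open')})"
        for i in issues
    ]
    context = "Previously reported issues for this file:\n" + "\n".join(lines)
    return context[:budget]  # crude truncation; real code would drop whole entries
```

When the toggle is on, the returned string would simply be prepended to the LLM prompt, with the remaining context budget reduced by its length.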
Benefits:
- Increase the consistency of security issues reported
- Allow tracking the historical evolution of the project
Note: if the results are persisted in a database, read from there instead of from files.
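For the database case, the file loader could be swapped for a query. The SQLite backend and the `issues` table schema (file, severity, title, status columns) below are purely hypothetical placeholders for whatever store the project actually uses:

```python
import sqlite3


def load_previous_issues_db(db_path: str, source_file: str) -> list[dict]:
    """Read prior issues for a file from a database instead of result files.

    Assumes a SQLite database with an "issues" table; the schema here
    is illustrative only.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT severity, title, status FROM issues WHERE file = ?",
            (source_file,),
        ).fetchall()
    finally:
        conn.close()
    return [{"severity": s, "title": t, "status": st} for s, t, st in rows]
```

The returned list has the same shape as the file-based loader's output, so the rest of the pipeline would not need to change.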