Explain vulnerabilities with AI

  • Tier: Ultimate
  • Add-on: GitLab Duo Enterprise, GitLab Duo with Amazon Q
  • Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

GitLab Duo Vulnerability Explanation can help you with a vulnerability by using a large language model to:

  • Summarize the vulnerability.
  • Help developers and security analysts to understand the vulnerability, how it could be exploited, and how to fix it.
  • Provide a suggested mitigation.

GitLab Duo can also automatically analyze critical and high severity SAST vulnerabilities to identify potential false positives. For more information, see SAST false positive detection.
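
For example, a SAST scanner commonly flags SQL statements that are built by concatenating user input. The snippet below is purely illustrative (a hypothetical finding, not output from GitLab Duo) and shows the kind of vulnerability the explanation covers, together with the usual mitigation of binding parameters instead of concatenating strings.

  import sqlite3

  def find_user_unsafe(conn, username):
      # Flagged by SAST: user input is concatenated into the SQL statement,
      # so a value such as "x' OR '1'='1" changes the query logic (SQL injection).
      query = "SELECT id, email FROM users WHERE username = '" + username + "'"
      return conn.execute(query).fetchall()

  def find_user_safe(conn, username):
      # Suggested mitigation: pass the value as a bound parameter so the
      # database driver treats it as data, not as part of the SQL.
      return conn.execute("SELECT id, email FROM users WHERE username = ?", (username,)).fetchall()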

Prerequisites:

  • GitLab Duo must be enabled for the group or instance.
  • You must be a member of the project.
  • The vulnerability must be from a SAST scanner.

To explain the vulnerability:

  1. On the left sidebar, select Search or go to and find your project.

  2. Select Secure > Vulnerability report.

  3. Optional. To remove the default filters, select Clear.

  4. Above the list of vulnerabilities, select the filter bar.

  5. In the dropdown list that appears, select Tool, then select all the values in the SAST category.

  6. Select outside the filter field. The vulnerability severity totals and list of matching vulnerabilities are updated.

  7. Select the SAST vulnerability you want explained.

  8. Do one of the following:

    • Select the text below the vulnerability description that reads You can also use AI by asking GitLab Duo Chat to explain this vulnerability and a suggested fix.
    • In the upper right, from the Resolve with merge request dropdown list, select Explain vulnerability to change the default action, then select the Explain vulnerability button.
    • Open GitLab Duo Chat and enter the /vulnerability_explain command.

The response is shown on the right side of the page.
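
If you want to list a project's SAST vulnerabilities programmatically before requesting an explanation in the UI, you can query the GitLab GraphQL API. The following Python sketch is a hedged example: the project path, token placeholder, and selected fields (title, severity, state) are assumptions, so verify them against the GraphQL schema of your GitLab version.

  import requests

  GRAPHQL_URL = "https://gitlab.com/api/graphql"
  TOKEN = "<personal access token with read_api scope>"  # placeholder

  # Fetch SAST findings for one project. Field names follow the public
  # GraphQL schema, but confirm them for your GitLab version.
  QUERY = """
  query ($fullPath: ID!) {
    project(fullPath: $fullPath) {
      vulnerabilities(reportType: [SAST], first: 20) {
        nodes {
          title
          severity
          state
        }
      }
    }
  }
  """

  def list_sast_vulnerabilities(full_path):
      response = requests.post(
          GRAPHQL_URL,
          json={"query": QUERY, "variables": {"fullPath": full_path}},
          headers={"Authorization": f"Bearer {TOKEN}"},
          timeout=30,
      )
      response.raise_for_status()
      project = response.json()["data"]["project"]
      return project["vulnerabilities"]["nodes"] if project else []

  if __name__ == "__main__":
      # "my-group/my-project" is a hypothetical project path.
      for vuln in list_sast_vulnerabilities("my-group/my-project"):
          print(vuln["severity"], vuln["state"], vuln["title"])

The reportType argument here mirrors the SAST filter you apply in the Vulnerability report above.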

On GitLab.com, this feature is available. By default, it is powered by the Anthropic claude-3-haiku model. GitLab cannot guarantee that the large language model produces correct results, so use the explanation with caution.

Data shared with third-party AI APIs for Vulnerability Explanation

The following data is shared with third-party AI APIs:

  • Vulnerability title (which might contain the filename, depending on which scanner is used).
  • Vulnerability identifiers.
  • Filename.