When somebody who is not a committer creates a new bug report for a vulnerability, it would be handy for them to be able to mark it as "committers-only".
As we move more into the runtime space, having this ability will become more critical. Controlling the timing of the dissemination of vulnerabilities is important in this space, and depending on the Security Team to create all of the bug reports for folks outside of our community isn't a fair expectation.
Having the Security Team create the bug reports often requires a bit of a dance to get the original reporter copied on the record. If that user hasn't already connected to our Bugzilla instance (which seems to be a very common case), then when we create the bug, we need them to log in and then tell us so that we can add them in CC (since they can't access the record to do it themselves). This makes us look goofy.
Based on my understanding of how Bugzilla groups function, if you can interact with the group, then you are a member of it. In this case, that would make every user part of the security team, so these reports would essentially become public.
It sounds like we need to decide how we want to handle security issues going forward. Perhaps we should re-convene the security calls for a while to discuss the options?
A web form is probably the simplest answer to that, but someone is still going to have to 'moderate' its operation to prevent it from being used to spam things without mercy.
More as a last resort, we could consider something like HackerOne, but I don't think that's a great fit as we don't have a bounty program.
Again perhaps we should reconvene the security team calls to address this?
This bug hasn't had any activity in quite some time. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet.
If you have further information on the current state of the bug, please add it. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
When I created this issue I was specifically focused on generalizing the Bugzilla implementation.
I'm inclined to close this (at least for now) to let the more general conversation of "what should we do now" play out: eclipsefdn/emo-team/emo#71 (moved)
I can think of at least two problems with using GitLab issues:
Users need specific permissions to mark an issue as confidential and I haven't had any success with doing this for an arbitrary user yet.
It's just weird for projects that don't use GitLab. Using Bugzilla is also weird for the same reason.
My preference is leaning towards introducing a private mailing list and letting projects decide what they need to do, rather than creating additional infrastructure.
If the reporter wants to participate: the project team adds the reporter as an external contributor to the project via the PMI (they will be added as a team member on GitLab with the Reporter role). Then either the reporter or the project team creates the confidential issue for the vulnerability. If the project team creates the issue, it needs to assign the issue to the reporter so they can read it.
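As a sketch of the "project team creates the confidential issue" step above, this is roughly what it could look like with the python-gitlab client. All names, IDs, and the token below are placeholders (assumptions), not real Eclipse values.

```python
# Hedged sketch: create a confidential issue and assign it to the reporter.
def confidential_issue_attrs(title, description, reporter_user_id):
    """Build the attribute dict for a confidential issue assigned to the
    reporter (assignment is what lets the reporter read the issue)."""
    return {
        "title": title,
        "description": description,
        "confidential": True,                 # hide the issue from non-members
        "assignee_ids": [reporter_user_id],   # grant the reporter visibility
    }

attrs = confidential_issue_attrs(
    "Possible vulnerability in component X",  # placeholder title
    "Details shared privately.",              # placeholder description
    4242)                                     # placeholder user ID

# With a real connection (not executed here):
#   import gitlab
#   gl = gitlab.Gitlab("https://gitlab.eclipse.org", private_token="...")
#   project = gl.projects.get("example/project")   # hypothetical path
#   issue = project.issues.create(attrs)
```

Whether the `confidential` flag is honored for the calling user still depends on the role question discussed below, so treat this only as an illustration of the intended flow.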
Note that I did not yet dig into what permissions are required to properly use confidential merge requests.
"Only members of the of a project with the lowest role (ie guest) can create confidential issues" - do you mean "at least the lowest role"?
Needing to create an account might be a roadblock for some, so an alternative via a mailing list should be possible. IMO it shouldn't be a major difficulty.
Adding security researchers as Reporters adds them to ALL confidential security issues, not only the one they reported. This could be problematic. Imagine an attack where a malicious reporter files a report only to be added so they can see another report that is being worked on.
Please note also that a confidential merge request needs to be made from a private fork. I tend to prefer using a private repo to work on the fix, then cleaning it up and sending it as a traditional public MR. The actual security review can be done on the private fork, and we do not need to mention the security impact in the MR discussion.
> "Only members of the of a project with the lowest role (ie guest) can create confidential issues" - do you mean "at least the lowest role"?

Correct. Bad wording on my end.
> Needing to create an account might be a roadblock for some, so an alternative via a mailing list should be possible. IMO it shouldn't be a major difficulty.

> Adding security researchers as Reporters adds them to ALL confidential security issues, not only the one they reported. This could be problematic. Imagine an attack where a malicious reporter files a report only to be added so they can see another report that is being worked on.
Good catch. Then only the project team creates the vulnerability ticket, and the vulnerability reporter gets it assigned to them if they want to participate (assigning a confidential issue to someone is the only way to grant them visibility on that issue).
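The assignment step described here can be sketched against the python-gitlab client interface. The issue object would come from something like `project.issues.get(iid)`; the IDs and names here are placeholders, not real values.

```python
# Hedged sketch: grant a reporter visibility on an existing confidential
# issue by assigning it to them.
def grant_visibility(issue, reporter_user_id):
    """Assign a confidential issue to the reporter.

    Per the workflow above, assignment is the only way to make a
    confidential issue visible to someone outside the project team.
    """
    issue.assignee_ids = [reporter_user_id]
    issue.save()  # pushes the updated assignee list to the GitLab API

# With a real connection (not executed here):
#   import gitlab
#   gl = gitlab.Gitlab("https://gitlab.eclipse.org", private_token="...")
#   issue = gl.projects.get("example/project").issues.get(17)  # hypothetical
#   grant_visibility(issue, 4242)                              # hypothetical ID
```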
> Please note also that a confidential merge request needs to be made from a private fork. I tend to prefer using a private repo to work on the fix, then cleaning it up and sending it as a traditional public MR. The actual security review can be done on the private fork, and we do not need to mention the security impact in the MR discussion.
I'm not sure I understand the subtleties of your preference for using a private repo rather than a confidential merge request made from a private fork. Would you mind elaborating, please?
> I'm not sure I understand the subtleties of your preference for using a private repo rather than a confidential merge request made from a private fork. Would you mind elaborating, please?
It's a bit of a question of preference here. A confidential merge request gives you a commit that lands in the main branch without a public MR. This can be suspicious (a hint for attackers), and surprising for developers. In a public MR, someone may comment that the fix adds a performance regression that wasn't seen before, for example. In a security case it will likely get accepted very rapidly and the comments will be "for later", but IMO it is good to have the public history for such a case. It allows all pending issues to be addressed in the open. There is still a slight risk that someone starts discussing possible security implications.
I believe that the security vulnerability reporting features now available on these platforms, such as Eclipse GitLab with https://gitlab.eclipse.org/security/vulnerability-reports and GitHub with private vulnerability reporting, fulfill this requirement. Therefore, I am closing this issue.