To handle vulnerabilities behind closed doors, we will need to grant a subset of committers access to some private resources. To support this concept of a project security team, we need to:
- Define the concept (a new project role) and an election process.
- Implement support for it in the PMI and infra (Foundation DB). I guess this could be an additive role on projects, which certain committers could hold.
- Take the new role into account in the GitHub and GitLab sync tools.
I would like to take this opportunity to also request the creation of LDAP groups for each role. Currently, we have a single group per project, which is assigned to all committers. It would be great to have one group per role (committers, project leads, security, ...).
FTR: GitHub supports Security Managers at the organization level. They can see all security alerts and also have access to security advisories. For some projects we started by adding all committers as security managers, though that might not be ideal. Setting the security managers of an organization can also be specified with Otterdog; there is no equivalent solution for GitLab-hosted projects, though.
With more usage of the Security Manager role on GitHub, we will likely need to come back to this discussion, define the roles, and then have each project clearly define who should have access to what. Agreed.
We should also include PMC members (or a subset of them) in the security role. This probably requires an evolution of the sync script to create a PMC team in each org...
Adoptium has a requirement for this capability. We have people we wish to include in evaluating vulnerability disclosures, advising on solutions, and contributing to disclosure records and fix implementations. These are not project leads and in fact may not all be committers on the project code (e.g. analysts). This is a project-level parallel to the current eclipsefdn-security team, whose members have the required rights at the Foundation level.
I would not force an overlap with existing groups (committers/PMC members), but rather allow the group of security managers to be defined at the project level in a flexible fashion, to allow for managing security alerts.
For example, I would like to be able to write something like this in Otterdog:
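A hypothetical jsonnet sketch, assuming Otterdog's org-configuration format and a `security_managers` settings key; the key and team names here are illustrative and may not match the actual schema:

```jsonnet
// Hypothetical Otterdog org configuration (key and team names are
// illustrative): a dedicated security-managers team defined at the
// project level, independently of the committer group.
local orgs = import 'otterdog-defaults.libsonnet';

orgs.newOrg('some-project') {
  settings+: {
    // Assumed setting: teams granted GitHub's "security manager" role.
    security_managers: ['some-project-security'],
  },
}
```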
@tellison how do you imagine adding people to that group? By a vote of the committers? Those people might have access to high-impact issues, so trust is important.
Would those people be permanent members, or change from one issue to another (e.g. domain experts)?
Hi @mrybczyn. Adoptium is structured as an "umbrella" project, with most of the technical work happening in the associated projects of Temurin, AQAvit, Mission Control, etc. I'd expect the Adoptium vulnerability group to oversee all technical projects, with group membership decided by votes of the Adoptium PMC. The Adoptium Vulnerability Group assists in managing a report through to resolution and disclosure.
We currently handle vulnerability reports through GitHub's private reporting mechanism. That mechanism allows individual domain experts to be added to a report, enabling collaboration at a granular level. Those domain experts would not need to be added to the vulnerability group for this purpose.
@wbeaton linked me to this issue in response to a query I made to him.
A project is informed about an issue under widespread embargo, with knowledge of the vulnerability being restricted to a small set of implementors worldwide. In this scenario, a project is unable to make use of the project repository to deal with the issue. A security advisory on GitHub isn't sufficient to restrict access because some amount of Eclipse staff have access to the repository for management purposes.
I don't know that there is a way around this beyond making a private fork, but since this discussion is about handling vulnerabilities behind closed doors, it is important to consider the case where information should not be exposed either directly or via backchannels to people at Eclipse.
> A project is informed about an issue under widespread embargo, with knowledge of the vulnerability being limited to a select group of implementors globally. In such situations, the project repository isn't a suitable place for addressing the concern. Using a GitHub security advisory isn't adequately restrictive since some members of the Eclipse staff can access the repository for administrative reasons.
The Eclipse security team and the GitHub organizations' owner (webmaster) are the only individuals permitted to access these advisories. IMO, trusting the webmaster is akin to trusting GitHub's own system administrators: both parties are somewhat anonymous yet likely have full visibility.
I hear your concerns about potentially breaching an embargo. But note that members of the security team are skilled experts. They know very well what can and cannot be divulged, and they use various security measures to keep their accounts and devices safe, such as 2FA, FDE, and BIOS protection. We uphold the principle of least privilege: they are granted only the security manager role on GitHub.
Their access to advisories serves to assist projects grappling with particular security issues, for instance, in assigning a CVE identifier or in determining the accurate CVSS rating. Many projects find these tasks complex, and our team's guidance necessitates this access. Moreover, we need to ensure that reported vulnerabilities are addressed correctly. Any lapse on the part of a project in responding to a critical report can tarnish the reputation of the Eclipse Foundation and, as collateral damage, all its projects. We monitor vulnerability reports closely only to ensure adherence to the Eclipse Foundation security policy.
> While I'm uncertain if there exists a workaround other than creating a private branch, considering this dialogue revolves around discreetly managing vulnerabilities, it's essential to ponder scenarios where data shouldn't be disclosed, whether directly or indirectly, to Eclipse associates.
I'd propose a thought: is it even advisable to employ GitHub for reporting such vulnerabilities? If the embargo is truly exclusive, the chosen individuals should operate on a platform wholly owned and regulated by them. Is that a feasible proposition? I doubt it.
So, I'd like to ask, how can we make sure you're confident that the information under embargo stays safe? What steps, announcements, documents, or checks would make you trust the security team and the whole Eclipse Foundation team more?
Completely fair @mbarbero, and I will freely admit a lot is going on at Eclipse that is a mystery to me and likely to many other committers. For example, I didn't know you could contact Eclipse Security for help determining a CVSS rating. CVSS ratings are a pain in the arse and entirely too subjective. Is there a process to reach out and schedule a review for something like that?
> I'd propose a thought: is it even advisable to employ GitHub for reporting such vulnerabilities? If the embargo is truly exclusive, the chosen individuals should operate on a platform wholly owned and regulated by them. Is that a feasible proposition? I doubt it.
The difference is in the tooling; GitHub personnel would have to use tools or permissions not available to the public to look through something like that, while Eclipse staff has access by default.
> So, I'd like to ask, how can we make sure you're confident that the information under embargo stays safe? What steps, announcements, documents, or checks would make you trust the security team and the whole Eclipse Foundation team more?
I don't know which individuals have access to any given project repository. I know a couple of "groups" do, but I don't know who all the webmasters are, nor who the security group members really are. I suspect the information is somewhere in the mountain of Eclipse documentation, but a quick look through the security policy page you linked and the EFDP turned up nothing that jumps out at me.
The eclipsefdn-security team is the easiest to identify, since I can find it referenced in our repository and follow a link to see its five members. Looking at their GitHub profiles, they look like a random group of people on GitHub. We even have "Bob the Builder"! :)
I checked https://www.eclipse.org/security/team/ under Team Members, and the five people listed there as staff are not the same five people who are in the eclipsefdn-security group. Fred G looks to be missing.
Having a clear Embargo section on /security may be warranted.
A single page listing everyone with access to project repositories, along with their qualifications, would be welcome. Or, since it is easy to find out who eclipsefdn-security are from within the repository, would it be good to have a similar eclipsefdn-webmasters group, so you can determine who has access directly from the repository itself?
[edit] Lord knows I am the last person who should criticize keeping a website or documentation up to date, and I apologize for the nitpick.
> The difference is in the tooling; GitHub personnel would have to use tools or permissions not available to the public to look through something like that, while Eclipse staff has access by default.
Jesse, your take is a very interesting one.
All sysadmins have the ability to "look through" stuff, this is no different for GitHub sysadmins. But they (and we) don't -- we're all bound to a high level of ethics and professionalism.
EF IT's primary purpose is the success of all its projects, and you interact with many of us by name on these channels, at conferences and in meetings to that end.
I sense that your sentiment of "A lot is going on at Eclipse that is a mystery to me" is a result of all your years at Eclipse talking directly to EF folks, people with names and faces, on public channels such as these; this shows you who we are and how much is going on behind the scenes.
On the flip side, you seem to favor trust in GitHub because its Infra team, although orders of magnitude larger than the EF's, is an anonymous, invisible entity to you, and the GitHub public UI toolset is all you're exposed to wrt GitHub internals, so perhaps there's no sense of "mystery going on".
> How many Jetty committers and contributors knew about the issue before the release? (Did it change during the work?)
Initially, four; the rest were informed once the scope and exposure had been determined within the project and we were moving on to determining the release process and testing.
> By which channel did you receive the notification?
Two different parties reached out via email, knowing we were implementors of an http/2 stack.
> What was the infrastructure configuration used for tests (the usual one, or a dedicated one)?
The public/private build infrastructure we normally use and maintain ourselves.
> If you were to be part of another embargo, what would you do the same and what would you change?
I suspect we would have started by forking into a private repository and working there with a subset of invited committers, shifting it into an advisory on the main repo once we were in the second phase, as described above.
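That two-phase approach can be sketched with plain git: add a second remote for the private fork and push the embargoed branch only there, merging back into the public repository once the advisory is ready. This is a minimal local sketch; the paths, branch name, and identities are illustrative stand-ins for the real public repository and private fork.

```shell
# Hypothetical sketch of phase one: do the embargoed work on a branch
# that is pushed only to a private remote, never to the public repo.
# Local paths stand in for the real public repo and private fork.
set -e
git init -q public
git -C public -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "public history"
git init -q --bare private.git          # stand-in for the private fork
git clone -q public work
git -C work remote add private "$PWD/private.git"
git -C work checkout -q -b embargo/fix  # embargoed work happens here
git -C work -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "fix developed under embargo"
# The fix reaches only the private remote; the public repo never sees it.
git -C work push -q private embargo/fix
```

In phase two, the branch would be opened as an advisory or pull request against the public repository and merged by a committer, as described above.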
@droy
Certainly, you make fair points regarding Eclipse vs. GitHub personnel and anonymity. Ultimately, I try to view it as a matter of mitigating risk. I understand why Eclipse staff have access to a project's repository, and we are fine with the current levels of trust in a project-based CVE. We are also fine with the structure, layout, and access that Eclipse staff have; it is part and parcel of being in an open-source foundation. None of it is strictly necessary, because we could manage the repo ourselves just fine, but we get it. Keeping the number of Eclipse staff with access to a bare minimum would be optimal.
But for whatever reason, embargoed issues feel like a different beast and should maybe be treated differently. That is why I commented here in the first place.
In #4166, I appreciate the concept where individuals in a security role would have only read access to the repository. These individuals would be elected in a manner similar to committers. However, the criteria for their nomination would not necessarily be based on past contributions. Instead, it would focus on the nominee's understanding of security processes. There could be various relationships between the set of committers and the members of the project's security team: they could be completely separate, partially overlapping, or entirely the same.
We should consider establishing a process that allows a project to determine if all committers are automatically included in the security team, eliminating the need for a separate election. Perhaps this could require approval from the PMC?
This could be part of the initial project setup: the security team is either the whole committer group or a separate one (I would expect a subset of committers in the general case). A switch between the two modes would then be a decision of the committers, approved by the PMC. In the case of a separate group, an election for each security team member seems good practice.
A scenario in which a designated security team has read access to otherwise private repos/forks used for vulnerability management so that they can give advice to committers who decide what changes are made to project content is, I believe, consistent with the EDP.
If write access to private repos/forks used for vulnerability management is to be restricted to some subset of the project committers, then the rules that determine which committers have that access need to follow our usual level playing field principles and be documented.
> If write access to private repos/forks used for vulnerability management is to be restricted to some subset of the project committers, then the rules that determine which committers have that access need to follow our usual level playing field principles and be documented.
Security always involves trust, and no process can guarantee adherence to vendor neutrality in this context. So, if certain individuals are elected by the committers to assume a "security" role, could we amend the EFDP to stipulate that these individuals are entrusted with the responsibility of granting access to private forks in a manner that upholds vendor neutrality? The rule would be trust.
There are no guarantees in anything. As long as the rules don't limit participation based on employer or employment status, we're on the right track.
The scenario that I'm pushing back on is one where a security team that does not have committer status pushes updates into a repository without knowledge or involvement of project committers. This feels like an antipattern.
I can envision a scenario where work is done by a security team in a private fork and is ultimately rolled into the public project repository via a pull/merge request that's merged by a committer. In this scenario, the security team doesn't necessarily need to have committer status, or any kind of official status with regard to the EDP. The security team contribution would effectively be no different from any other contribution that arrives via pull request.
Having said that, I assume there are practical limitations that require granting committer-equivalent privileges to security team members, which necessitates (at least) that we have committer agreements in place for anybody in the "security" role.
FWIW, I'm trying to shape a scenario where we can designate a security team right now without waiting for the (at least) two- to three-month cycle required to update the EDP. As long as committers are the ones who decide what goes into the public repository and the security team contributors have signed the ECA, I don't think an EDP change is required.
In any practical situation I can think of (in a normally functioning project), an external security team will always want to ask at least one person knowledgeable about the project (a committer, then) for help. The main reason is the risk of introducing regressions. A committer would be needed to make sure all tests are run and the coding style matches what the project expects before submitting a potentially critical update, and to avoid trivial discussions on that merge request. So I would say that a typical project will need at least one committer on the security team.
Even if a committer is not part of the security team, merging a private fork into the project's main repository can only be done by a committer.
> Having said that, I assume there are practical limitations that require granting committer-equivalent privileges to security team members, which necessitates (at least) that we have committer agreements in place for anybody in the "security" role.
On GitHub, I don't foresee any limitations, but on gitlab.eclipse.org, it might be necessary to grant committer-equivalent privileges to security team members, indeed. Regardless, I concur that it is essential for security team members to have an ECA and ICA/MCCA on file.
> FWIW, I'm trying to shape a scenario where we can designate a security team right now without waiting for the (at least) two- to three-month cycle required to update the EDP. As long as committers are the ones who decide what goes into the public repository and the security team contributors have signed the ECA, I don't think an EDP change is required.
I appreciate this perspective. However, do you think we should still consider modifying the EDP to formalize the 'security' role more clearly?
> The scenario that I'm pushing back on is one where a security team that does not have committer status pushes updates into a repository without knowledge or involvement of project committers. This feels like an antipattern.
This should be a dealbreaker for any solution. The only people who should have commit rights to a project are those who have built karma and gone through a committer election process. If the security team cannot get someone in a project to pay attention to a security issue, the project should be put through whatever process exists for a misbehaving project.
The proposals linked go exactly in that direction. The security team must have at least one committer, and we assume they will be the ones pushing the code if needed.