There have been discussions about relaxing the long-established SimRel requirement to sign all the jars/bundles of every project's release train contributions.
The IDE Working Group Steering Committee was asked for an opinion/position on the topic, i.e., are signed jars important to your organization? The general sense is that signing in and of itself is not so important; rather, overall security is key, simply stated as "Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects."
The planning council is tasked with deciding on an appropriate strategy (rules) for ensuring that downloaded artifacts are "secure" and are actually verified to be exactly the ones produced by the projects.
Signing definitely serves that purpose and has the advantage of being verifiable even after the artifacts exist on the machine.
That being said, it has the huge disadvantage that it modifies the artifact such that if the consumed jar was already an OSGi bundle, it needs to be given a new version/ID to produce a result conforming to the current rules. It's also a disadvantage that signatures expire, requiring bundles to be signed yet again.
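As an aside, this is what offline verification of an embedded jar signature looks like in practice - a rough sketch using only JDK APIs (the jar path is a placeholder):

```java
import java.io.File;
import java.io.InputStream;
import java.security.CodeSigner;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class VerifySignedJar {
    public static void main(String[] args) throws Exception {
        // Opening the jar with verify=true makes the JDK check each entry's
        // digest against the signed manifest as the entry is read.
        try (JarFile jar = new JarFile(new File("artifact.jar"), true)) {
            byte[] buffer = new byte[8192];
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                // An entry must be read to end-of-stream before its signers are
                // available; a tampered entry throws SecurityException here.
                try (InputStream in = jar.getInputStream(entry)) {
                    while (in.read(buffer) != -1) {
                        // discard; we only care about the verification side effect
                    }
                }
                CodeSigner[] signers = entry.getCodeSigners();
                if (signers == null && !entry.isDirectory()
                        && !entry.getName().startsWith("META-INF/")) {
                    System.out.println("Unsigned entry: " + entry.getName());
                }
            }
            System.out.println("All signed entries verified.");
        }
    }
}
```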
An alternative "external signature" approach has been proposed and prototyped/implemented.
We could adopt this as an alternative approach to signing, perhaps restricting it to those situations where the contributed jars are not built on Eclipse infrastructure and hence are not readily signed as they are today by Tycho builds on Eclipse's CI infrastructure.
Is this alternative approach sufficient? What are the drawbacks and advantages? (Where is it documented?)
Or, taking one more step back, do we actually need anything beyond secure metadata (https) and SHAs to verify that the artifacts published to a repository are exactly the same ones (the same bytes) downloaded to the client from the internet? Even if we don't strictly need anything in addition to this, do we nevertheless want an additional (alternative) layer of security? After all, we all do signing already, so disabling internal signing now doesn't buy us anything beyond (significantly) faster builds.
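For concreteness, the SHA-based verification amounts to something like this minimal sketch (assuming Java 17+, a locally downloaded artifact, and an expected digest taken from the repository metadata, e.g. a property such as download.checksum.sha-256 in artifacts.xml; the file name and digest argument are placeholders):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class VerifyChecksum {
    public static void main(String[] args) throws Exception {
        Path artifact = Path.of(args[0]);   // e.g. some downloaded bundle jar
        String expected = args[1];          // digest taken from the repository metadata
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(artifact));
        String actual = HexFormat.of().formatHex(digest);
        System.out.println(actual.equalsIgnoreCase(expected)
                ? "Checksum matches the repository metadata."
                : "MISMATCH: artifact is not the one described by the metadata.");
    }
}
```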
"Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects
So basically, a post-build step that would verify (in a way or another) that what's part of the SimRel site or IDE is correct is enough?
Eg, we don't care about what happens on users' end as long as what's on the infra can be verified?
"Build artifacts made available at the Eclipse Foundation
are verifiably the ones built by respective projects."
Won't this require a "certificate per project", afaik the certificate is the same for all artifacts at the momment (most probably because Eclipse is not a code-signing-certificate authority to create individual ones)
> Signing definitely serves that purpose and has the advantage of being
> verifiable even after the artifacts exist on the machine.
As suggested in Bug 575540, if we handle signatures as extra files/artifacts, a signature can work similarly without modifying the artifact, but still allows having the signature alongside the artifact.
> Or, taking one more step back, do we actually need anything beyond secure
> metadata (https) and SHAs to verify that the artifacts published to a
> repository are exactly the same ones (the same bytes) downloaded to the
> client from the internet?
I think signed jars just give a false implication of security as long as every committer potentially has access to the (Eclipse) signing infrastructure, so from my point of view metadata is enough (when talking about mirrors). But in general this also does not help much as long as one can add any site to Eclipse without any verification (you just need one 'bad' update site and can then install any code you like via patch features).
> Even if we don't strictly need anything in addition to this,
> do we nevertheless want an additional (alternative) layer
> of security? After all, we all do signing already, so disabling internal
> signing now doesn't buy us anything beyond (significantly) faster builds.
If we really want to make things more secure, we probably need some kind of PGP web of trust + (cryptographically) signed commits so that every line of code/change is verified/reviewed by a "completely-trusted" person ... but that's probably far beyond the scope of this.
"Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects
So basically, a post-build step that would verify (in a way or another) that
what's part of the SimRel site or IDE is correct is enough?
Eg, we don't care about what happens on users' end as long as what's on the
infra can be verified?
Well, let's look closely at what I said and what you paraphrased:
"do we actually need anything beyond secure metadata (https) and SHAs to verify that the artifacts published to a repository are exactly the same ones (the same bytes) downloaded to the client from the internet"
!=
"we don't care about what happens on users' end as long..."
If we're are going to treat this process a supreme court with an army of lawyers, where every last word and phrase is parsed into all possible meanings, and where we can pick one of those meanings, one that's effectively nonsensical and contradictory to the original intent, as perhaps the intended meaning, then we can continue with further refinements of the wording during the next meeting to try to word the intent in such a way that no one will be able to misinterpret it such that it's contrary to the original intent.
Of course the fundamental point is that one can (and does) verify that the bytes published by the project are exactly the bytes downloaded to the client. I don't speak for the Steering Committee, but I believe this to be the intent.
"Build artifacts made available at the Eclipse Foundation
are verifiably the ones built by respective projects."
Won't this require a "certificate per project", afaik the certificate is the
same for all artifacts at the momment (most probably because Eclipse is not
a code-signing-certificate authority to create individual ones)\
It's hard to imagine why would one need a certificate per project. That seems clearly unworkable..
> Signing definitely serves that purpose and has the advantage of being
> verifiable even after the artifacts exist on the machine.
> As suggested in Bug 575540, if we handle signatures as extra files/artifacts,
> a signature can work similarly without modifying the artifact, but still
> allows having the signature alongside the artifact.
This is a further implementation detail of whether the external signature is a separate "artifact/file" versus a data item in the repository metadata XML (as is the case for the SHAs currently).
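For illustration, verification with such an external signature could look roughly like this - a sketch assuming a detached signature file and an out-of-band public key (all file names are hypothetical, and SHA256withRSA is just an example algorithm):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class VerifyExternalSignature {
    public static void main(String[] args) throws Exception {
        byte[] artifact = Files.readAllBytes(Path.of("artifact.jar"));
        byte[] detachedSig = Files.readAllBytes(Path.of("artifact.jar.sig"));
        // Assumed: an X.509-encoded RSA public key distributed out of band,
        // e.g. alongside the repository metadata.
        byte[] keyBytes = Files.readAllBytes(Path.of("project-public.key"));
        PublicKey key = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(keyBytes));
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(key);
        verifier.update(artifact); // the artifact itself stays byte-identical
        System.out.println(verifier.verify(detachedSig)
                ? "Detached signature is valid for these exact bytes."
                : "Signature does NOT match: artifact or signature was altered.");
    }
}
```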
> Or, taking one more step back, do we actually need anything beyond secure
> metadata (https) and SHAs to verify that the artifacts published to a
> repository are exactly the same ones (the same bytes) downloaded to the
> client from the internet?
> I think signed jars just give a false implication of security as long as every
> committer potentially has access to the (Eclipse) signing infrastructure, so
> from my point of view metadata is enough (when talking about mirrors). But
> in general this also does not help much as long as one can add any site to
> Eclipse without any verification (you just need one 'bad' update site and
> can then install any code you like via patch features).
Yes, we discussed the fact that signatures don't ensure that the jars themselves don't contain bad logic and bad security loopholes.
The other things you describe are part of the "threat model". For sites hosted at Eclipse, no one can "just add any site" except authorized committers/projects who can edit the sites and provide the bytes for clients to download. For sites hosted elsewhere, all bets are off: anyone can sign any bogus thing with arbitrary signatures and make it look like it's fine and good. Or not?
> Even if we don't strictly need anything in addition to this,
> do we nevertheless want an additional (alternative) layer
> of security? After all, we all do signing already, so disabling internal
> signing now doesn't buy us anything beyond (significantly) faster builds.
> If we really want to make things more secure, we probably need some kind of
> PGP web of trust + (cryptographically) signed commits so that every line of
> code/change is verified/reviewed by a "completely-trusted" person ... but
> that's probably far beyond the scope of this.
So we don't actually have any solution at this point, because of course nothing is 100% secure when it comes to software. So we only have open-ended problems...
OK, I see I got confused by the fact that "verifiably" doesn't make explicit who is supposed to be able to verify, and when; but OK to assume it's the user during the installation.
And, in that case, I agree with your comment.
As discussed in bug 575688 (which is so far not really a plan for action), secured p2 metadata over HTTPS with checksums does guarantee we install the right content for a given artifact, and the content that's installed can be verified before restarting the IDE (the user can compute checksums locally and compare them with the metadata they can find on download.eclipse.org). However, this only works if we ensure that the metadata themselves are only coming from download.eclipse.org, which in the current form is not something we can guarantee because:
1. Users usually install from other sources, and the "Contact all software sites" option is checked by default, so other sources - and maybe some less reliable ones - can influence the result and push their artifacts.
2. As far as I know, a p2 repository on download.eclipse.org may be able to reference external sources (in composites or repository references). However I don't think this is really happening in practice.
If the goal is purely to establish verifiability, then signatures are not relevant at all. Signatures are relevant when it comes to trust, and it doesn't seem like building trust strategies is a priority. Basically, things like PGP seem out of scope.
(In reply to Christoph Laeubrich from comment #2)
(In reply to Ed Merks from comment #0)
> "Build artifacts made available at the Eclipse Foundation
> are verifiably the ones built by respective projects."
> Won't this require a "certificate per project"? AFAIK the certificate is the
> same for all artifacts at the moment (most probably because Eclipse is not
> a code-signing-certificate authority that could create individual ones).
> It's hard to imagine why one would need a certificate per project. That
> seems clearly unworkable.
Sorry for possible confusion; I don't mean project in the sense of a (source) project you import into the IDE but "Eclipse projects" like Jetty, Platform, ... Or how should one understand "know what the respective (Eclipse) project has built" if all use the same signature/certificate/...? At least for Maven artifacts I have seen individual projects using individual PGP keys, so it could be part of the project setup to have a dedicated key for each project.
> Signing definitely serves that purpose and has the advantage of being
> verifiable even after the artifacts exist on the machine.
> As suggested in Bug 575540, if we handle signatures as extra files/artifacts,
> a signature can work similarly without modifying the artifact, but still
> allows having the signature alongside the artifact.
> This is a further implementation detail of whether the external signature is
> a separate "artifact/file" versus a data item in the repository metadata XML
> (as is the case for the SHAs currently).
Beyond that (and this is an extension of my original request!), I can imagine that p2 could even 'install' the PGP signature into an Eclipse installation so it could be queried/checked even at startup/runtime through the p2 query API.
> The other things you describe are part of the "threat model". For sites
> hosted at Eclipse, no one can "just add any site" except authorized
> committers/projects who can edit the sites and provide the bytes for clients
> to download. For sites hosted elsewhere, all bets are off: anyone can sign
> any bogus thing with arbitrary signatures and make it look like it's
> fine and good. Or not?
I mean what Mickael also phrased: that currently any IU is installed/updated from any source (isn't it?). So maybe it would be good to have a property in the metadata that states that artifacts installed from this source are only allowed to be updated/installed from a given set of "safe hosts"; this could contain wildcards like eclipse.org/updates/platform/* or something.
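Purely to illustrate the idea (the property and pattern syntax are hypothetical - nothing like this exists in p2 today):

```java
import java.net.URI;
import java.util.regex.Pattern;

public class SafeHostMatcher {
    // Hypothetical: pattern would be read from a metadata property such as
    // "safe.update.hosts=eclipse.org/updates/platform/*"
    static boolean allowedBy(String pattern, URI updateSite) {
        // Quote the literal parts and let '*' match any run of characters.
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        String hostAndPath = updateSite.getHost() + updateSite.getPath();
        return Pattern.matches(regex, hostAndPath);
    }

    public static void main(String[] args) {
        URI ok = URI.create("https://eclipse.org/updates/platform/4.21/");
        URI bad = URI.create("https://evil.example.com/updates/platform/");
        System.out.println(allowedBy("eclipse.org/updates/platform/*", ok));  // true
        System.out.println(allowedBy("eclipse.org/updates/platform/*", bad)); // false
    }
}
```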
> So we don't actually have any solution at this point, because of course
> nothing is 100% secure when it comes to software. So we only have open-ended
> problems...
No, I just wanted to note that we already have good ways to make sure the artifact is not tampered with (md5, sha256, pgp, ...); if we want more than "the artifact is not modified since it was published in this metadata" it will require a lot more work.
> OK, I see I got confused by the fact that "verifiably" doesn't make explicit
> who is supposed to be able to verify, and when; but OK to assume it's the user
> during the installation.
> And, in that case, I agree with your comment.
> As discussed in bug 575688 (which is so far not really a plan for action),
> secured p2 metadata over HTTPS with checksums does guarantee we install the
> right content for a given artifact, and the content that's installed can be
> verified before restarting the IDE (the user can compute checksums locally and
> compare them with the metadata they can find on download.eclipse.org). However,
> this only works if we ensure that the metadata themselves are only coming
> from download.eclipse.org, which in the current form is not something we can
> guarantee because:
> 1. Users usually install from other sources, and the "Contact all software
> sites" option is checked by default, so other sources - and maybe some less
> reliable ones - can influence the result and push their artifacts.
I think this is a super important point you make. The "threat model" here is that someone provides an update site that has artifacts with IDs that match the artifact IDs distributed by some Eclipse project, and while those "alternate artifacts" might well have valid checksums and their metadata might well be secured by https, nevertheless the artifact bytes could well be different bytes than the ones actually produced by an Eclipse project. That seems like a problem we should prevent.
> 2. As far as I know, a p2 repository on download.eclipse.org may be able to
> reference external sources (in composites or repository references). However
> I don't think this is really happening in practice.
Yes, this seems a little more far-fetched. We're really not supposed to be redistributing arbitrary content that hasn't been reviewed/approved.
> If the goal is purely to establish verifiability, then signatures are not
> relevant at all. Signatures are relevant when it comes to trust, and it
> doesn't seem like building trust strategies is a priority. Basically, things
> like PGP seem out of scope.
Yes, though I wonder here if signatures are really about trust versus about "certification of origin". It's already been pointed out that trust is a pretty strong word, i.e., "we (Eclipse) signed this jar so you can trust it's secure." But can we really trust that the software actually functions securely? Not so much, I think...
I bring this up because, in relation to your point 1 above, I think signatures (internal and/or external) prevent substitution of artifacts except by those that are also signed and therefore also associated with a particular origin.
> Yes, though I wonder here if signatures are really about trust versus about
> "certification of origin". It's already been pointed out that trust is a
> pretty strong word, i.e., "we (Eclipse) signed this jar so you can trust
> it's secure." But can we really trust that the software actually functions
> securely? Not so much, I think...
As mentioned above, we can't really trust "Eclipse", but that's why I think there should not be a "global" Eclipse certificate but project-specific ones, because I might trust 'tycho' as I regularly follow the code stream and review the code changes, but I won't trust an arbitrary Eclipse project because I don't know anything about it.
So "certification of origin" can also be a source of "trust".
> I bring this up because, in relation to your point 1 above, I think
> signatures (internal and/or external) prevent substitution of artifacts
> except by those that are also signed and therefore also associated with a
> particular origin.
That sounds like a good idea: that one artifact can only be updated by the same "origin" (or at least give the user a strong warning if not). Again, here it would be useful if it were more fine-grained than "eclipse-or-not-eclipse".
(In reply to Christoph Laeubrich from comment #2)
(In reply to Ed Merks from comment #0)
> "Build artifacts made available at the Eclipse Foundation
> are verifiably the ones built by respective projects."
> Won't this require a "certificate per project"? AFAIK the certificate is the
> same for all artifacts at the moment (most probably because Eclipse is not
> a code-signing-certificate authority that could create individual ones).
> It's hard to imagine why one would need a certificate per project. That
> seems clearly unworkable.
> Sorry for possible confusion; I don't mean project in the sense of a
> (source) project you import into the IDE but "Eclipse projects" like Jetty,
> Platform, ... Or how should one understand "know what the respective
> (Eclipse) project has built" if all use the same signature/certificate/...?
> At least for Maven artifacts I have seen individual projects using individual
> PGP keys, so it could be part of the project setup to have a dedicated key
> for each project.
I don't believe it's important to know which project produced the artifacts but rather to know that some Eclipse project produced the artifacts, i.e., the certification of origin back to Eclipse. There's typically much data in the artifacts to track them back to projects.
> Signing definitely serves that purpose and has the advantage of being
> verifiable even after the artifacts exist on the machine.
> As suggested in Bug 575540, if we handle signatures as extra files/artifacts,
> a signature can work similarly without modifying the artifact, but still
> allows having the signature alongside the artifact.
> This is a further implementation detail of whether the external signature is
> a separate "artifact/file" versus a data item in the repository metadata XML
> (as is the case for the SHAs currently).
> Beyond that (and this is an extension of my original request!), I can imagine
> that p2 could even 'install' the PGP signature into an Eclipse installation so
> it could be queried/checked even at startup/runtime through the p2 query API.
Indeed, external signatures could be "installed" to be available for later use; perhaps just saved as part of the profile, which is effectively an artifact repository.
> The other things you describe are part of the "threat model". For sites
> hosted at Eclipse, no one can "just add any site" except authorized
> committers/projects who can edit the sites and provide the bytes for clients
> to download. For sites hosted elsewhere, all bets are off: anyone can sign
> any bogus thing with arbitrary signatures and make it look like it's
> fine and good. Or not?
> I mean what Mickael also phrased: that currently any IU is
> installed/updated from any source (isn't it?). So maybe it would be good to
> have a property in the metadata that states that artifacts installed from this
> source are only allowed to be updated/installed from a given set of "safe
> hosts"; this could contain wildcards like eclipse.org/updates/platform/* or
> something.
That sounds significantly complicated and is then fragile when organizations mirror repositories.
> So we don't actually have any solution at this point, because of course
> nothing is 100% secure when it comes to software. So we only have open-ended
> problems...
> No, I just wanted to note that we already have good ways to make sure the
> artifact is not tampered with (md5, sha256, pgp, ...); if we want more than "the
> artifact is not modified since it was published in this metadata" it will
> require a lot more work.
> Yes, though I wonder here if signatures are really about trust versus about
> "certification of origin". It's already been pointed out that trust is a
> pretty strong word, i.e., "we (Eclipse) signed this jar so you can trust
> it's secure." But can we really trust that the software actually functions
> securely? Not so much, I think...
> As mentioned above, we can't really trust "Eclipse", but that's why I think
> there should not be a "global" Eclipse certificate but project-specific
> ones, because I might trust 'tycho' as I regularly follow the code stream and
> review the code changes, but I won't trust an arbitrary Eclipse project
> because I don't know anything about it.
> So "certification of origin" can also be a source of "trust".
This seems to me way more complicated than what we currently have, and, given the stacked nature of dependencies, trusting one project but not another just doesn't seem feasible. E.g., if one doesn't trust EMF or ECF, then there is no release train...
> I bring this up because, in relation to your point 1 above, I think
> signatures (internal and/or external) prevent substitution of artifacts
> except by those that are also signed and therefore also associated with a
> particular origin.
> That sounds like a good idea: that one artifact can only be updated by the
> same "origin" (or at least give the user a strong warning if not). Again,
> here it would be useful if it were more fine-grained than
> "eclipse-or-not-eclipse".
I'm not sure how far off the deep end we need to go on this. People will install bundles from other sites. The marketplace has 1000s of them. I believe human beings will not be making fine-grained decisions about each bundle but rather coarse-grained ones: "Is this bundle from an origin that certifies that this thing I'm installing comes from that origin?" And this information is generally not even presented to the users. Mostly licenses and missing certification are presented...
Keep in mind that this issue arose to reduce the effort around contributing to SimRel. Ideally we don't make life more complicated, but we also don't make things less secure either...
> This seems to me way more complicated than what we currently have, and, given
> the stacked nature of dependencies, trusting one project but not another
> just doesn't seem feasible. E.g., if one doesn't trust EMF or ECF, then there
> is no release train...
Security comes at a price... I wouldn't expect someone to install EMF or ECF directly, but rather to use some kind of pre-built package, which can ship with the necessary trusted "root certificates"...
> I bring this up because, in relation to your point 1 above, I think
> signatures (internal and/or external) prevent substitution of artifacts
> except by those that are also signed and therefore also associated with a
> particular origin.
> That sounds like a good idea: that one artifact can only be updated by the
> same "origin" (or at least give the user a strong warning if not). Again,
> here it would be useful if it were more fine-grained than
> "eclipse-or-not-eclipse".
> I'm not sure how far off the deep end we need to go on this. People will
> install bundles from other sites. The marketplace has 1000s of them. I
> believe human beings will not be making fine-grained decisions about each
> bundle but rather coarse-grained ones.
I wouldn't expect each bundle to be decided over, but if the SSL certificate of a host changes after I have first contacted it, I get a strong warning that something might be wrong.
Decisions in a "web of trust" can also be hierarchical, so if I trust "Eclipse" and Eclipse has claimed high trust in "Apache", I can also trust "Apache" bundles.
"Is this bundle from an origin that certifies that this thing I'm
installing comes from that origin"? And this
information is generally not even presented to the users. Mostly licenses
and missing certification are presented...
But how is A PGP signature different from a certificate? So I think the most "unobtrusive" option would be to handle them the same here. If a PGP signature of a key completely unknown to the user a dialog will ask if the user likes to trust it (that's how it work currently but without the choice to permanently trust it as far as I know) and then in the future I don't get bothered.
> Keep in mind that this issue arose to reduce the effort around contributing
> to SimRel. Ideally we don't make life more complicated, but we also don't
> make things less secure either...
Security comes at a price... Given that Maven Central enforces PGP-signed artifacts and there are hundreds of thousands of artifacts and projects contributing content there, it seems setting up a PGP key can be assumed to be not so complicated.
(In reply to Christoph Laeubrich from comment #11)
(In reply to Ed Merks from comment #10)
> This seems to me way more complicated than what we currently have, and, given
> the stacked nature of dependencies, trusting one project but not another
> just doesn't seem feasible. E.g., if one doesn't trust EMF or ECF, then there
> is no release train...
> Security comes at a price... I wouldn't expect someone to install EMF or ECF
> directly, but rather to use some kind of pre-built package, which can ship
> with the necessary trusted "root certificates"...
The installer installs everything from scratch, so people do in fact make exactly this kind of "decision" whenever they install a package via the installer.
And I thought the root certificates are in the JRE/JDK?
> I bring this up because, in relation to your point 1 above, I think
> signatures (internal and/or external) prevent substitution of artifacts
> except by those that are also signed and therefore also associated with a
> particular origin.
> That sounds like a good idea: that one artifact can only be updated by the
> same "origin" (or at least give the user a strong warning if not). Again,
> here it would be useful if it were more fine-grained than
> "eclipse-or-not-eclipse".
> I'm not sure how far off the deep end we need to go on this. People will
> install bundles from other sites. The marketplace has 1000s of them. I
> believe human beings will not be making fine-grained decisions about each
> bundle but rather coarse-grained ones.
> I wouldn't expect each bundle to be decided over, but if the SSL certificate of
> a host changes after I have first contacted it, I get a strong warning that
> something might be wrong.
What you outline does suggest that when installing many dozens of projects I will be making dozens of trust decisions. Each EPP package is composed from a great many projects. Even the Platform SDK is many projects. So multiple trust decisions sounds rather unworkable...
> Decisions in a "web of trust" can also be hierarchical, so if I trust
> "Eclipse" and Eclipse has claimed high trust in "Apache", I can also trust
> "Apache" bundles.
Now we've gone away from per-project back to coarse-grained "hosts". I guess my problem here is that I don't actually concretely know all the details of the PGP proposal, and perhaps I'm making some poor assumptions as a result...
"Is this bundle from an origin that certifies that this thing I'm
installing comes from that origin"? And this
information is generally not even presented to the users. Mostly licenses
and missing certification are presented...
But how is A PGP signature different from a certificate?
I didn't suggest they are different. I assumed they are certificate-based as are internal signatures. But again, not that I'm making assumptions.
> So I think the most
> "unobtrusive" option would be to handle them the same here. If a PGP
> signature comes from a key completely unknown to the user, a dialog will ask
> if the user would like to trust it (that's how it works currently, but without
> the choice to permanently trust it as far as I know), and then in the future
> I don't get bothered.
Currently the user is prompted only if the root certificate isn't known in the JRE. I thought that would still be the case. Perhaps I'm making yet another bad assumption.
> Keep in mind that this issue arose to reduce the effort around contributing
> to SimRel. Ideally we don't make life more complicated, but we also don't
> make things less secure either...
> Security comes at a price... Given that Maven Central enforces PGP-signed
> artifacts and there are hundreds of thousands of artifacts and projects
> contributing content there, it seems setting up a PGP key can be assumed to be
> not so complicated.
I didn't suggest that PGP signing is complicated. I was under the impression that it is already implemented. It's all this per-project trust/certification stuff that starts to seem complicated to me. Also, I believe the current thinking was that this new alternative external signing is only used for things not already easily signed as we've been doing for years. But perhaps you're thinking differently on that front too...
The scope here seems to be not what I expected - this conversation started as (from Mickael's email[1]) a request to drop the jar-signing requirement (with the open-ended question of what security we want for SimRel), but now seems to have transformed into a discussion about rearchitecting the entire security model of p2.
The IDE WG and the Planning Council are clearly in favour of change, and the requirement is very short. Note that nothing in the requirement from the IDE WG says how it needs to be verifiable; that can require multiple steps.
For example, a project contributes to SimRel, and SimRel does a mirror operation. That is verifiable today and does not require signing. Then the Eclipse IDE installs from SimRel, and it is possible using the hashes to verify that what was downloaded was what was expected.
Why is it more complicated than that to meet the requirement from the WG? Which is back to the last paragraph of Comment 0, which seems to have one answer:
(Christoph Laeubrich from comment #2)
> from my point of view metadata is enough
> The scope here seems to be not what I expected - this conversation started
> as (from Mickael's email[1]) a request to drop the jar-signing requirement (with
> the open-ended question of what security we want for SimRel), but now
> seems to have transformed into a discussion about rearchitecting the entire
> security model of p2.
Yes, I was hoping to bound the problem, not have it snowball into something bigger.
> The IDE WG and the Planning Council are clearly in favour of change, and the
> requirement is very short.
Yes, though apparently easy to misinterpret...
> Note that nothing in the requirement from the IDE
> WG says how it needs to be verifiable; that can require multiple steps.
> For example, a project contributes to SimRel, and SimRel does a mirror
> operation. That is verifiable today and does not require signing. Then
> the Eclipse IDE installs from SimRel, and it is possible using the hashes to
> verify that what was downloaded was what was expected.
Yes, though Mickael had an important observation. If the "available" update sites include some non-Eclipse-hosted site, the artifact metadata for some arbitrary installable unit could well come from such an alternate site, in which case the artifact itself could be some alternative-bits version of the artifact (with the correct hash sum in that alternative artifact metadata), so nothing is noticed as bad or wrong...
Do you see what I/he means?
Such spoofing is possible even using https, and is even possible when signing is enforced, but of course in the case of signing, the alternative jar would then also need to be signed (by something other than Eclipse, though), which at least would make it more trackable...
Do we care about any of this?
> Why is it more complicated than that to meet the requirement from the WG? Which
> is back to the last paragraph of Comment 0, which seems to have one answer:
> (Christoph Laeubrich from comment #2)
> from my point of view metadata is enough
Yes - I understand this use case. It is certainly a problem that the Eclipse community can look to solve. @Mikael/@Ed, do you see this as blocking the change to the install model? If not, let's split that issue out. The issue is not new, it is orthogonal to the issue here, and it also does not require signing to solve.
> Such spoofing is possible even using https, and is even possible when
> signing is enforced, but of course in the case of signing, the alternative
> jar would then also need to be signed (by something other than Eclipse,
> though), which at least would make it more trackable...
> Do we care about any of this?
Yes. We want to make sure that users install what they think they are installing. If I (as a user) put a whole bunch of p2 sites in my Available Sites list, that is me saying I am happy with the content of these p2 sites. So the only thing we should care about is that the artifacts are the ones in these p2 sites as determined by the p2 metadata these sites serve up (this is why Bug 575688 is important - Eclipse needs to make sure it is downloading the metadata the user intended).
We (as the maintainers of SimRel and Eclipse projects) have a responsibility to ensure that everything made available on p2 sites located on download.eclipse.org is what we intended. That is done by delegating trust to individual projects to only place valid artifacts on download.eclipse.org, and it is done by the Webmaster securing access to download.eclipse.org as they do now with SSH via the CI infra.
If we assume that users do trust the sites they're adding to their available software or when installing content (they can verify those sites for trust in advance - e.g. by looking at metadata), then it doesn't become an issue if one site "leaks" an artifact into the resolution: it's consistent with the user's decision to trust the site.
When all sites are trusted, there is no point in further signatures and checks IMO, just like most p2 repos have unsigned content and most users just click the "Install anyway" (read "I don't care") button without verifying, just because they trust their providers.
Wherever artifacts come from, it's always possible to verify them just after installation and before a restart (i.e. before newly installed content is activated and executed) by comparing the local installation to what's expected from the reference provider.
> If we assume that users do trust the sites they're adding to their available
> software or when installing content (they can verify those sites for trust
> in advance - e.g. by looking at metadata)
p2 sites can also come from p2 touchpoint instructions when installing stuff, if I remember correctly.
Anyway, as you mentioned, I think there are two "user groups":
1. those who don't care and click any button to "get that stuff installed" as they think it is useful
2. those who are not allowed to install stuff because of company policies
(In reply to Christoph Laeubrich from comment #17)
> p2 sites can also come from p2 touchpoint instructions when installing stuff,
> if I remember correctly.
p2 touchpoints are visible in the metadata, similarly to repository references. So this can be verified even before installation by those who care.
Do you really know anyone who reads and fully understands p2 metadata XML before adding an update site? One would even need to investigate each artifact to see if it contains a p2.inf to be absolutely sure...
I think download.eclipse.org now automatically redirects http requests to https.
@Christoph: am I right? If so, does that mean that we have some guarantee that what we get from download.eclipse.org with p2 is always "safe"? Typically, does it guarantee that p2 metadata cannot be modified when fetching them from download.eclipse.org?
> I think download.eclipse.org now automatically redirects http requests to
> https. @Christoph: am I right?
At least in the browser; I remember there were some issues in the past with p2, but I haven't checked for a long time. Technically we should add a preference to block all http update sites, because a redirect can help users in a migration phase but does not help with security, as a malicious host entry can still prevent the redirection.
> If so, does that mean that we have some guarantee
> that what we get from download.eclipse.org with p2 is always "safe"?
It only guarantees that if I contact download.eclipse.org it is the "real" server owned by the Eclipse Foundation; whether or not the content itself is "safe" there is no guarantee. Actually, any of the (how many are there?) committers of an Eclipse project can place any content there :-)
> Typically, does it guarantee that p2 metadata cannot be modified when
> fetching them from download.eclipse.org?
As far as I know, the metadata can be replaced at any time with any content; there is no "read-only"/"write-once" implemented.
Just one note about SSL: even if it will scare Ed that something like this is possible and available, there are so-called "Enterprise SSL Proxy Servers" that act as a man-in-the-middle to decrypt any SSL traffic in a corporate network; see for example [1].
So actually, if you want to be really sure, you need to:
1. make sure SSL is used in the first place (do not rely on automatic redirects!)
2. check the root / intermediate certificates as well to see where the "chain of trust" originates from
3. check the certificate and compare it with some public information (I don't know if the Eclipse Foundation publishes the checksums of the certificates it uses)
For sure this could be automated in a way that warns a user if the certificate for a well-known host is not the same (see the sketch below), but for sure all of this does not make it easier for the user; that's why I always asked what we'd like to achieve:
A) deployment as easy as possible, without any user interaction --> can't be made reliably safe, neither with code signing nor hash sums, but might be okay in most situations if we only want to make sure an artifact is not tampered with/broken.
B) gain as much "trust" as possible that only trustworthy content is ever installed --> this will require a lot of work, security review, and to some extent user interaction.
Of course there is a wide range between A and B... Also, please note I'm just a very interested enthusiast of security/encryption, so don't take me as an authoritative instance here :-)
> Several enterprise-level proxies support re-encrypting the connections
> your browser makes using a corporate certification authority.
> Essentially the administration team can push out a certificate to your
> workstation via group policies, and add it to the list of trusted authorities.
> The proxy then has the private key corresponding to that certificate and
> generates a certificate for each hostname on the fly.
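To make the "automated warning" idea above concrete, here is a rough sketch that pins a server certificate's SHA-256 fingerprint and warns when it changes (Java 17+; the pinned value is a made-up placeholder, and a real trust decision would need more care than this):

```java
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.HexFormat;
import javax.net.ssl.HttpsURLConnection;

public class CertificateFingerprintCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: the fingerprint recorded the first time the host was contacted.
        String pinned = "0000000000000000000000000000000000000000000000000000000000000000";
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://download.eclipse.org/").openConnection();
        conn.connect();
        // The first certificate in the chain is the server's own certificate.
        Certificate serverCert = conn.getServerCertificates()[0];
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(serverCert.getEncoded());
        String actual = HexFormat.of().formatHex(digest);
        conn.disconnect();
        System.out.println(actual.equalsIgnoreCase(pinned)
                ? "Certificate matches the pinned fingerprint."
                : "WARNING: certificate changed since first contact: " + actual);
    }
}
```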
(In reply to Christoph Laeubrich from comment #23)
(In reply to Mickael Istria from comment #22)
> I think download.eclipse.org now automatically redirects http requests to
> https. @Christoph: am I right?
> At least in the browser; I remember there were some issues in the past with
> p2, but I haven't checked for a long time.
So I verified, and download.eclipse.org sends a 301/Moved Permanently to Eclipse/p2/ECF when trying to get an http:// URL.
So it seems one cannot install content from download.eclipse.org over plain HTTP.
Whether this causes an installation failure or a redirection doesn't really matter; the simple fact that it's not possible to stay on plain HTTP is a source of security per se.
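For reference, the check is easy to reproduce programmatically - a small JDK-only sketch that deliberately does not follow the redirect, so the 301 and its Location header can be inspected (the URL path is just an example):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckHttpRedirect {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://download.eclipse.org/releases/latest/").openConnection();
        conn.setInstanceFollowRedirects(false); // we want to see the 301 itself
        conn.setRequestMethod("HEAD");
        System.out.println("Status: " + conn.getResponseCode());            // expect 301
        System.out.println("Location: " + conn.getHeaderField("Location")); // expect https://...
        conn.disconnect();
    }
}
```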
> Technically we should add a preference
> to block all http update sites, because a redirect can help users in a
> migration phase but does not help with security, as a malicious host entry
> can still prevent the redirection.
I don't think it is necessary in the context of this discussion. A preference wouldn't really help in having a strategy that enforces "Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects".
It could be a topic worth discussing for p2, but it's not important here.
> If so, does that mean that we have some guarantee
> that what we get from download.eclipse.org with p2 is always "safe"?
> It only guarantees that if I contact download.eclipse.org it is the "real"
> server owned by the Eclipse Foundation; whether or not the content itself is
> "safe" there is no guarantee. Actually, any of the (how many are there?)
> committers of an Eclipse project can place any content there :-)
Assuming that "respective projects" is more or less "committers team" when it comes to responsibility (committers have some responsibility for the project, for better or worse, as they sign the paperwork when becoming a committer for the first time), then it means that download.eclipse.org forcing HTTPS gives us the guarantee that the bits fetched from download.eclipse.org are the ones that were pushed by the committers/projects.
> Typically, does it guarantee that p2 metadata cannot be modified when
> fetching them from download.eclipse.org?
> As far as I know, the metadata can be replaced at any time with any content;
> there is no "read-only"/"write-once" implemented.
I was thinking more about it being modified in transfer.
(In reply to Christoph Laeubrich from comment #24)
> Just one note about SSL: even if it will scare Ed that something like
> this is possible and available, there are so-called "Enterprise SSL Proxy
> Servers" that act as a man-in-the-middle to decrypt any SSL traffic in a
> corporate network; see for example [1].
I think such an approach, with an infra or client setup enforcing proxies and so on, is not really important here. After all, if the Internet in general is compromised (intentionally, with "Enterprise SSL Proxy Servers") or by some other preliminary attack, it is a bit beyond Eclipse's duty to detect and handle that, I guess, as such an attack would really affect the whole system.
The more we progress in the discussion, the more I have the impression that the current state of the infra already ensures that "Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects", even without signatures.
If I am right, it means we could simply remove the signature requirement (but still keep it as a recommendation, because it's easy to set up when building artifacts locally, so it'd be a pity not to do it). That would allow easier consumption of 3rd-party artifacts, by getting them as-is into download.eclipse.org, and fulfill the requirement expressed here.
I guess this discussion is really about the interpretation of the requirement:
"Build artifacts made available at the Eclipse Foundation are verifiably the ones built by respective projects."
More specifically, I think the issue is about when the verification should / can be done.
If the verification of the artifacts only needs to happen at provisioning / download time, IMO signed jars offer little to no value compared to https+SHA from the EF's infra.
If the verification in the requirement also implies "out-of-band" verification, then digital signature of artifacts is a must-have. Signed jars are the target platform's (JVM's) preferred solution, and any other solution requires a new system to be defined. A well-designed, secure software system based on crypto primitives is hard and should not be defined without experts. See Schneier's Law: https://www.schneier.com/blog/archives/2011/04/schneiers_law.html. Jar signing has been defined by experts.
Meandering along this way, I want to mention there is groundwork being done to specify a framework for software update systems: TUF, https://theupdateframework.io/overview/. It's more an intellectual framework than a specification or an SDK. They have defined a threat model, summarized at https://theupdateframework.io/security/. If anyone really wants to invent something different from what we have today, I suggest they have a look at this.
We seem to be coming to the conclusion that we can today verify at installation (provisioning) time that artifacts are the ones built, through the combination of https, SHAs, the Eclipse Webmaster's policies and procedures, and the delegation to Eclipse committers.
That seems to meet the requirement set out by the IDE WG that the Planning Council has been asked to execute. There is a Planning Council meeting on Wednesday this week, where I hope we can finalize the discussion so that the changed requirement can go into practice.
There are many other topics brought up in this discussion related to security, including verifying artifacts at times other than installation/provisioning ("out-of-band" as Mikaël called it in Comment 26). These are valuable conversations to have, but I want to ensure that they do not interfere with moving forward on getting official approval up and down the decision tree on removing jar signing as a requirement. Please continue these discussions in other/new bugzillas.
It's a fact that download.eclipse.org yields a 301 when accessed via http, so we've concluded that one cannot download from that host insecurely. That's a sound conclusion, because the 301 will result in a new request using https, and that request will be secure. But note the phrase "one cannot download from that host insecurely"...
A man-in-the-middle attack, as I understand it, can allow something else hacked into the network to respond to the original http-based request to download.eclipse.org by redirecting it elsewhere, so I don't think it's a sound conclusion that the download.eclipse.org server's redirection is a sufficient source of security, because it does not prevent downloading from a different host.
Moreover, if we now remove the signing requirement, we not only allow a man-in-the-middle to yield untrustworthy metadata (as before), we also allow the installation of unsigned artifacts that can be substitutions for the original Eclipse Foundation build artifacts (a new and not improved source of insecurity).
This is the danger of non-experts making security decisions, as highlighted by Mikaël Barbero's comments.
> A man-in-the-middle attack, as I understand it, can allow something else
> hacked into the network to respond to the original http-based request to
> download.eclipse.org by redirecting it elsewhere
A MITM can simply respond with any content (including a tampered update site), but a MITM is not strictly required; alternatives include DNS spoofing or malicious hosts-file modifications.
That's why I said that redirection is a convenience (in a way, to not break old http links) but not safe (I don't know if p2 actually caches the redirect and then never queries http again, or queries http first again on the next restart).
> Moreover, if we now remove the signing requirement, we not only allow a
> man-in-the-middle to yield untrustworthy metadata (as before), we also allow
> the installation of unsigned artifacts that can be substitutions for the
> original Eclipse Foundation build artifacts (a new and not improved source of
> insecurity).
But that's the question here: do we want/need this level of security in all cases? E.g., even if the requirement to sign artifacts is removed, this does not mean we enforce removing all signatures.
To come back a bit to the original source that triggered the whole discussion, we have two cases here:
1. Artifacts compiled from code produced/maintained by an Eclipse project - and I don't see much issue here in requiring that they be signed (to prevent what you have described, or to give some kind of origin indication or tamper proofing).
2. Artifacts from some kind of third-party project (which of course could be an Eclipse project as well, but also an Apache or whatever compatibly licensed project...) that are consumed (and redistributed!) by an Eclipse project.
For the second case we have the problem that the current policy requires us to sign the artifact (creating a new one that would not compare equal to the original, e.g. via hashes) if it is included in an update site for SimRel, and that's what
a) requires additional effort,
b) has the chance of duplication (as we are not able to match the name+hash anymore), and
c) produces (from my POV) a false assumption of "security", as we literally only check for license compatibility but (correct me if I'm wrong) do not do any security review before allowing a third-party artifact to be included.
So for me the whole point can be reduced to what kind of security we need/want for making sure that an artifact not originally produced from the project's own sources is not tampered with/modified/replaced/... when included in a SimRel repo.
(In reply to Christoph Laeubrich from comment #29)
(In reply to Ed Merks from comment #28)
> A man-in-the-middle attack, as I understand it, can allow something else
> hacked into the network to respond to the original http-based request to
> download.eclipse.org by redirecting it elsewhere
> A MITM can simply respond with any content (including a tampered
> update site), but a MITM is not strictly required; alternatives include DNS
> spoofing or malicious hosts-file modifications.
So there are more ways to receive metadata content that is not the content one would expect. It seems like you are saying that even if we use https://download.eclipse.org, we could end up at a different host than download.eclipse.org, albeit one that supports https and has its own certificate. Do I read this correctly? Or is it the case that https necessarily reaches the expected named host? (I would have assumed this is the case; otherwise I don't see how https is secure at all.)
> That's why I said that redirection is a convenience (in a way, to not break old
> http links) but not safe (I don't know if p2 actually caches the
> redirect and then never queries http again, or queries http first again on the
> next restart).
No, p2 doesn't cache the redirection.
It seems to me the server's redirection doesn't buy us any significant level of additional security. What about having the underlying frameworks "redirect" any http URL to an https URL in the code before accessing/fetching the URL's contents? I.e., ensuring programmatically that only https requests go out into the network? (Such that existing repositories and marketplace listings don't stop working.)
> Moreover, if we now remove the signing requirement, we not only allow a
> man-in-the-middle to yield untrustworthy metadata (as before), we also allow
> the installation of unsigned artifacts that can be substitutions for the
> original Eclipse Foundation build artifacts (a new and not improved source of
> insecurity).
> But that's the question here: do we want/need this level of security in all
> cases? E.g., even if the requirement to sign artifacts is removed, this does
> not mean we enforce removing all signatures.
We've had this level of security in the past, and I'm not sure I accept the argument that being mostly secure is the same as being actually secure.
> To come back a bit to the original source that triggered the whole
> discussion, we have two cases here:
> 1. Artifacts compiled from code produced/maintained by an Eclipse project -
> and I don't see much issue here in requiring that they be signed (to
> prevent what you have described, or to give some kind of origin indication or
> tamper proofing).
> 2. Artifacts from some kind of third-party project (which of course could be an
> Eclipse project as well, but also an Apache or whatever compatibly licensed
> project...) that are consumed (and redistributed!) by an Eclipse project.
> For the second case we have the problem that the current policy requires us
> to sign the artifact (creating a new one that would not compare equal to
> the original, e.g. via hashes) if it is included in an update site for
> SimRel, and that's what
> a) requires additional effort,
> b) has the chance of duplication (as we are not able to match the name+hash
> anymore), and
> c) produces (from my POV) a false assumption of "security", as we literally
> only check for license compatibility but (correct me if I'm wrong) do not do
> any security review before allowing a third-party artifact to be included.
> So for me the whole point can be reduced to what kind of security we
> need/want for making sure that an artifact not originally produced from the
> project's own sources is not tampered with/modified/replaced/... when included
> in a SimRel repo.
Yes, I think we all understand why we came to this issue, and of course I'm certainly a huge fan of simplifying things; I resent the fact that my builds spend more time signing than building.
I get the sense that minimally we need to ensure that no http requests for metadata go out onto the internet. Is that sufficient to ensure that the metadata requested from https://download.eclipse.org really comes from our download.eclipse.org?
> But note the phrase "one cannot download
> from that host insecurely"...
> A man-in-the-middle attack, as I understand it, can allow something else
> hacked into the network to respond to the original http-based request to
> download.eclipse.org by redirecting it elsewhere, so I don't think it's a
> sound conclusion that the download.eclipse.org server's redirection is a
> sufficient source of security, because it does not prevent downloading from
> a different host.
[1] That seems right. So relying on the http redirection is not safe enough, because we rely on http in the first place.
So do I get it right that one approach that would work could be the one that was discussed in bug 575688: do not let p2 consume an http update site of any form without the user explicitly approving it?
I'm marking bug 575688 as a requirement here, because apparently it is.
> Moreover, if we now remove the signing requirement, we not only allow a
> man-in-the-middle to yield untrustworthy metadata (as before), we also allow
> the installation of unsigned artifacts that can be substitutions for the
> original Eclipse Foundation build artifacts (a new and not improved source of
> insecurity).
I think this can be "verified" post-download and identified as not being the original build artifacts from download.eclipse.org. So it doesn't seem to be the exact concern of this bug as it was described.
But as the requirement is pretty vague, it's open to interpretation.
> This is the danger of non-experts making security decisions, as highlighted
> by Mikaël Barbero's comments.
Right. And who are the security experts we have at hand to validate the decisions? If we have none, does the WG consider hiring one to audit the proposals, and maybe implement some better solutions?
(In reply to Christoph Laeubrich from comment #29)
> That's why I said that redirection is a convenience (in a way, to not break old
> http links) but not safe (I don't know if p2 actually caches the
> redirect and then never queries http again, or queries http first again on the
> next restart).
Ack, see [1].
> But that's the question here: do we want/need this level of security in all
> cases? E.g., even if the requirement to sign artifacts is removed, this does
> not mean we enforce removing all signatures.
> To come back a bit to the original source that triggered the whole
> discussion, we have two cases here:
> 1. Artifacts compiled from code produced/maintained by an Eclipse project -
> and I don't see much issue here in requiring that they be signed (to
> prevent what you have described, or to give some kind of origin indication or
> tamper proofing).
> 2. Artifacts from some kind of third-party project (which of course could be an
> Eclipse project as well, but also an Apache or whatever compatibly licensed
> project...) that are consumed (and redistributed!) by an Eclipse project.
> For the second case we have the problem that the current policy requires us
> to sign the artifact (creating a new one that would not compare equal to
> the original, e.g. via hashes) if it is included in an update site for
> SimRel, and that's what
> a) requires additional effort,
> b) has the chance of duplication (as we are not able to match the name+hash
> anymore), and
> c) produces (from my POV) a false assumption of "security", as we literally
> only check for license compatibility but (correct me if I'm wrong) do not do
> any security review before allowing a third-party artifact to be included.
> So for me the whole point can be reduced to what kind of security we
> need/want for making sure that an artifact not originally produced from the
> project's own sources is not tampered with/modified/replaced/... when included
> in a SimRel repo.
+1, that's really what's desired here from the maintainers' POV.
At this stage, the requirement to go through Orbit when updating a lib in order to remain good enough for SimRel is really something that can make some projects consider whether being in SimRel is worth the effort.
And that raises the question of whether projects should be the ones carrying the burden of SimRel security requirements here, or whether SimRel could take care of implementing the security it wants for 3rd-party artifacts. We could have SimRel add to the verification/report steps something that verifies, for example, that all unsigned artifacts are the same ones we'd get from Maven Central.
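Such a check could be quite mechanical; as a sketch, comparing a contributed artifact against the .sha1 digest that Maven Central publishes next to every artifact (the coordinates and file names here are hypothetical):

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class CompareWithMavenCentral {
    public static void main(String[] args) throws Exception {
        // Hypothetical coordinates; Maven Central serves a .sha1 file next to each artifact.
        String base = "https://repo1.maven.org/maven2/com/example/lib/1.0.0/lib-1.0.0.jar";
        String published;
        try (InputStream in = new URL(base + ".sha1").openStream()) {
            // The .sha1 file contains the hex digest (sometimes followed by a file name).
            published = new String(in.readAllBytes(), StandardCharsets.US_ASCII)
                    .trim().split("\\s+")[0];
        }
        byte[] local = Files.readAllBytes(Path.of("lib-1.0.0.jar"));
        String actual = HexFormat.of()
                .formatHex(MessageDigest.getInstance("SHA-1").digest(local));
        System.out.println(actual.equalsIgnoreCase(published)
                ? "Artifact matches what Maven Central serves."
                : "Artifact differs from what Maven Central serves!");
    }
}
```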
> So there are more ways to receive metadata content that is not the content
> one would expect. It seems like you are saying that even if we use
> https://download.eclipse.org, we could end up at a different host than
> download.eclipse.org, albeit one that supports https and has its own
> certificate. Do I read this correctly?
My comment was more focused on the http->https redirection. But in general you are right: as long as the other server also has a valid certificate for the hostname, one would not notice that.
That's why there are "certificate authorities" that check whether you are the "real" owner of a domain before they issue a certificate.
> Or is it the case that https necessarily reaches the expected named host?
> (I would have assumed this is the case, otherwise I don't see how https
> is secure at all.)
This depends a bit on the circumstances. https has several "levels" of peer identity:
1. no identification at all; only the channel is encrypted
2. the server identifies itself with a certificate, and if you verify that by any means (e.g. trust the issuer or trust this particular certificate) you can be sure that you are talking to the right endpoint
3. the client identifies itself as well, and the server can check that only 'trusted' clients can interact with the server
> It seems to me the server's redirection doesn't buy us any significant level
> of additional security. What about having the underlying frameworks "redirect"
> any http URL to an https URL in the code before accessing/fetching the URL's
> contents? I.e., ensuring programmatically that only https requests go out
> into the network? (Such that existing repositories and marketplace listings
> don't stop working.)
That would be the better approach; e.g. we can block http entirely or issue a warning dialog if https is not available.
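A client-side upgrade like that could be as simple as rewriting the scheme before any request is made - a sketch only; a real implementation would of course live in p2's transport layer:

```java
import java.net.URI;
import java.net.URISyntaxException;

public final class HttpsUpgrade {
    // Rewrites http:// URIs to https:// before any network request is made,
    // so no plain-http request ever leaves the client.
    static URI upgrade(URI location) throws URISyntaxException {
        if (!"http".equalsIgnoreCase(location.getScheme())) {
            return location; // already https (or a non-http scheme): leave untouched
        }
        return new URI("https", location.getUserInfo(), location.getHost(),
                location.getPort(), location.getPath(),
                location.getQuery(), location.getFragment());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(upgrade(URI.create("http://download.eclipse.org/releases/latest/")));
        // -> https://download.eclipse.org/releases/latest/
    }
}
```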
> I get the sense that minimally we need to ensure that no http requests for
> metadata go out onto the internet. Is that sufficient to ensure that the
> metadata requested from https://download.eclipse.org really comes from our
> download.eclipse.org?
If only trustworthy certificate authorities are enabled, yes.