```
error: unpacking of archive failed on file /usr/include/gtk-3.0/gdk/x11/gdkx11devicemanager.h: cpio: rename failed - No space left on device
```
The produced Eclipse build is therefore broken.
Please clean up enough space on whichever machines these jobs are running.
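For reference, freeing space on a Docker-based builder usually comes down to standard Docker CLI housekeeping, along these lines (a minimal sketch; which machines to clean and how aggressively to prune is up to the infra team):

```sh
# Confirm which filesystem is full.
df -h

# See how much space Docker itself is holding in images, containers,
# volumes and build cache.
docker system df

# Reclaim space: removes stopped containers, unused networks, build
# cache, and (because of -a) all images not used by a running container.
docker system prune -af
```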
Steps to reproduce
It is not known whether this is reproducible, or whether it will occur on every Eclipse integration build from now on.
What is the current bug behavior?
The Eclipse build machine ran out of disk space.
What is the expected correct behavior?
There is enough space to build Eclipse.
Priority
- Urgent
- High
- Medium
- Low

Severity
- Blocker
- Major
- Normal
- Low
Impact
Eclipse build is broken.
Activity
Simeon Andreev changed title from "No space left on device during Eclipse integration build" to "No space left on device during Docker build for Eclipse SDK"
Simeon Andreev changed the description
@fgurr @mbarbero: I assume this is a trivial task to fix; would it be possible to do this today? SDK builds on Linux are all broken because of the broken Docker container builds.
AFAICT the job builds 10 different docker images in one run. If it were split into two jobs that each build only 5 docker images, the required disk space during a build could be reduced (see the sketch below). The job is already cleaning up before and after the docker builds.
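To illustrate the idea, here is a hypothetical sketch of the batching (whether done as two Jenkins jobs or two batches within one job, the effect on peak disk usage is similar); the registry name, image names, and directory layout are all made up:

```sh
#!/bin/bash
set -e

# Build and push a batch of images, then prune local state so the next
# batch starts from a clean slate, capping peak disk usage.
build_batch() {
  for image in "$@"; do
    docker build -t "example.org/eclipse/$image" "dockerfiles/$image"
    docker push "example.org/eclipse/$image"
  done
  docker system prune -af
}

# Hypothetical split of the 10 images into two batches of 5.
build_batch centos-7 centos-8 centos-9 ubuntu-20.04 ubuntu-22.04
build_batch ubuntu-18.04 opensuse-15 fedora debian alpine
```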
I see centos7, 8, 9, a few ubuntu images, etc.? Is there any documentation on which image is configured for which purpose? Before splitting all that, shouldn't we simply throw away unused images?
I'm all for reducing images. I've tried to replace centos7, and the only remaining use I am aware of is https://github.com/eclipse-platform/eclipse.platform.text/blob/master/Jenkinsfile#L8, as the platform.text tests were only working on centos7 the last time I tried.
The Ubuntu and Suse images are used by https://ci.eclipse.org/releng/job/Start-smoke-tests/, but due to resource exhaustion only the latest Ubuntu is used.
I'll get rid of the Ubuntu 18 and 20 images now, and if someone migrates the platform.text verification build to CentOS 8 we can get rid of CentOS 7 too. It's a whole other story if someone would volunteer to move all verification builds to CentOS 9, so that CentOS 8 could be dropped as well and we'd have only the latest version of each of the 3 major Linux distros.
‘centos-unitpod17-svbrl-3qfnl’ is offline

```
releng/centos-unitpod17-svbrl-3qfnl Container jnlp was terminated (Exit Code: 255, Reason: Error)
- custom -- running
-----Logs-------------
- jnlp -- terminated (255)
-----Logs-------------
Mar 17, 2023 6:51:25 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 3044.vb_940a_a_e4f72e
Mar 17, 2023 6:51:25 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Mar 17, 2023 6:51:25 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Mar 17, 2023 6:51:25 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins-ui.releng.svc.cluster.local/releng/]
Mar 17, 2023 6:51:45 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Failed to connect to http://jenkins-ui.releng.svc.cluster.local/releng/tcpSlaveAgentListener/: jenkins-ui.releng.svc.cluster.local
java.io.IOException: Failed to connect to http://jenkins-ui.releng.svc.cluster.local/releng/tcpSlaveAgentListener/: jenkins-ui.releng.svc.cluster.local
    at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:214)
    at hudson.remoting.Engine.innerRun(Engine.java:744)
    at hudson.remoting.Engine.run(Engine.java:543)
Caused by: java.net.UnknownHostException: jenkins-ui.releng.svc.cluster.local
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
    at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:507)
    at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:602)
    at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:275)
    at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:374)
    at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:395)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
    at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:211)
    ... 2 more
```
This issue of randomly failing to connect has been with us for months. @fgurr IIRC you mentioned some updates that are supposed to fix it, right? Have they been performed?
Jenkins and the Kubernetes plugin were updated on the Releng Jenkins instance, but it seems that did not help much.
@slakkimsetti So the docker images built on the releng JIPP are not generally usable the way I tried, but only meant for smoke tests? If that's the case, centos7 is something I should drop too, right?
The console log shows output from all 10 docker build commands, so it will be difficult to identify the exact problem there. Individual logs are archived and available as build artifacts; if there is any error, please go through them. They are separated per docker image.
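Presumably the per-image separation comes from redirecting each build's output to its own file, something like this (a hypothetical sketch; the logs/ directory and the image name are illustrative):

```sh
#!/bin/bash
set -o pipefail  # keep docker's exit status despite the pipe into tee

mkdir -p logs
# Capture stdout and stderr of one image build in its own artifact file
# while still echoing everything to the combined console log.
docker build -t example.org/eclipse/centos-8 dockerfiles/centos-8 2>&1 \
  | tee logs/centos-8.log
```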
@fgurr Is there supposed to be a generally usable centos-9 label to run builds on?
No, and there are no plans to make a CentOS 9 image available as a default pod template. We are aiming to switch to Ubuntu-based images (see #1623); it just got buried in the backlog.