woodpecker/pipeline
Harri Avellan 25842498c6 Prevent leaking goroutines on cancelled steps (#6186)
I've been debugging a case where Kubernetes backend agents occasionally get OOMKilled. During the investigation I noticed an increasing number of goroutines on the agent processes, which turned out to be multiple goroutines hanging on:
```
runtime.gopark(proc.go:461)
runtime.chanrecv(chan.go:667)
runtime.chanrecv1(chan.go:509)
go.woodpecker-ci.org/woodpecker/v3/pipeline/backend/kubernetes.(*kube).WaitStep(kubernetes.go:325)
go.woodpecker-ci.org/woodpecker/v3/pipeline/runtime.(*Runtime).exec(executor.go:261)
go.woodpecker-ci.org/woodpecker/v3/pipeline/runtime.(*Runtime).execAll.func1.1(executor.go:174)
go.woodpecker-ci.org/woodpecker/v3/pipeline/runtime.(*Runtime).execAll.func1.2(executor.go:200)
runtime.goexit(asm_arm64.s:1268)
go.woodpecker-ci.org/woodpecker/v3/pipeline/runtime.(*Runtime).execAll.func1(executor.go:199)
```

My analysis is that on cancellation nothing fires the `finished` channel, because the pod is deleted rather than updated (there is a registered handler for pod update events). There was already a comment about adding a cancellation handler for `ctx.Done`, so I think that is the proper fix (instead of adding an event handler for pod deletion).
2026-03-02 20:14:56 +00:00