Upgrading Gitea did not fix the issue; the bug does not seem to originate with Gitea. We have just changed our Redis persistence settings, and hopefully that was the problem.
The core problem seems to have been Redis persistence. The default snapshot policy rewrites the entire Redis DB to disk every 5 minutes if more than 10 keys have changed. Since the explorer changed more than 10 keys per 5-minute window and did not expire keys by default (it does now), Redis was reading and writing about 3.0 GB to disk every 5 minutes, which used up all of the disk IO available to the VM.
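For anyone hitting the same thing, these are the standard knobs involved. This is a sketch, not our exact configuration, and the key name below is just a placeholder:

    # Inspect the current RDB snapshot policy; the classic default includes "300 10",
    # i.e. snapshot the whole dataset if at least 10 keys changed within 5 minutes.
    redis-cli CONFIG GET save

    # Disable periodic RDB snapshots at runtime (mirror the change in redis.conf to keep it).
    redis-cli CONFIG SET save ""

    # Give cache keys a TTL so the dataset stops growing indefinitely.
    redis-cli EXPIRE explorer:some-key 86400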
High load was a red herring: it was processes stuck in the kernel (a "busy wait") waiting for those disk operations to finish, which still counts toward the load average.
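If you are debugging something similar: on Linux, processes blocked on disk IO count toward the load average even while the CPU sits idle, so the raw load number is misleading. A rough sketch of how to tell the two apart, assuming the sysstat package is installed:

    iostat -x 5   # %util near 100 and high await mean the disk is saturated
    vmstat 5      # a high 'wa' column with low 'us'/'sy' means the CPU is mostly waiting on IO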
This looks very similar to the issues I am seeing:
https://github.com/go-gitea/gitea/issues/10661