Unable to determine a parent commit to compare against in base branch after Squash and Merge


We are using Codecov in our CI, and lately we stopped seeing Codecov reports on PRs.
It turns out the base report is always missing.
Every PR has to run CI before it can be merged, and as part of CI we upload Codecov reports. The reports are attached to the branch’s latest commit - the one that is tested in CI.
We can see the reports in the Codecov UI as expected, but there is always the “Missing base report” error and no commit to compare against.

We stopped having reports on the base branch starting 2020-12-02, although we didn’t change anything.

To merge PRs we use the ‘squash and merge’ strategy. As a result, the commit that lands on the base branch is different from the commit we submitted a report for, so it is missing the report.
I actually don’t understand how it worked previously, since the only commits we upload reports for belong to feature branches and never enter the base branch.

Is this a known issue? Is Codecov incompatible with squash merges? How can we fix it?

Upload command:
bash -c 'bash <(curl -s https:///bash) -f go/idps/unit_coverage.txt -f /integration_coverage.txt -F server -C


@ollana, closing this out as I believe we are handling through other support channels.

@tom Was there any answer to these questions? We recently started using the ‘squash and merge’ strategy on GitHub, and now all of our PRs are missing their base reports. Thanks

@mdemoret-nv can you share the Codecov output? If you are running GitHub Actions, you need to set fetch-depth to > 1 or 0 in the actions/checkout stage.
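For reference, a minimal sketch of that checkout configuration (a hedged example; adjust the action version to whatever your workflow already uses):

```yaml
# Sketch of the suggested actions/checkout setting; a fetch-depth of 0
# fetches the full history, so the uploader can find a parent commit
# on the base branch to compare against.
- uses: actions/checkout@v2
  with:
    fetch-depth: 0
```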

Sure, here is an example Codecov output for a PR I am working on (ignore the -N command arg; I was experimenting):

16:33:50 ++ curl -s https://codecov.io/bash
16:33:50 ++ bash -s -- -F ubuntu18.04,python3.8,cuda11.0,dask -f /tmp/workspace/rapidsai/gpuci/cuml/prb/cuml-gpu-test/CUDA/11.0/GPU_LABEL/gpu-a100/OS/ubuntu18.04/PYTHON/3.8/python/cuml/cuml-dask-coverage.xml -N ''
16:33:50   _____          _
16:33:50  / ____|        | |
16:33:50 | |     ___   __| | ___  ___ _____   __
16:33:50 | |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
16:33:50 | |___| (_) | (_| |  __/ (_| (_) \ V /
16:33:50  \_____\___/ \__,_|\___|\___\___/ \_/
16:33:50                               Bash-20210115-cec3c92
16:33:50 ==> git version 2.29.2 found
16:33:50 ==> curl 7.71.1 (x86_64-conda-linux-gnu) libcurl/7.71.1 OpenSSL/1.1.1h zlib/1.2.11 libssh2/1.9.0 nghttp2/1.41.0
16:33:50 Release-Date: 2020-07-01
16:33:50 Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp 
16:33:50 Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz NTLM NTLM_WB SPNEGO SSL TLS-SRP UnixSockets
16:33:50 ==> Jenkins CI detected.
16:33:50     project root: /tmp/workspace/rapidsai/gpuci/cuml/prb/cuml-gpu-test/CUDA/11.0/GPU_LABEL/gpu-a100/OS/ubuntu18.04/PYTHON/3.8/ci/artifacts/cuml/cpu/conda_work
16:33:50 --> token set from env
16:33:50     Yaml found at: codecov.yml
16:33:50     -> Found 1 reports
16:33:50 ==> Detecting git/mercurial file structure
16:33:50 ==> Reading reports
16:33:50     + /tmp/workspace/rapidsai/gpuci/cuml/prb/cuml-gpu-test/CUDA/11.0/GPU_LABEL/gpu-a100/OS/ubuntu18.04/PYTHON/3.8/python/cuml/cuml-dask-coverage.xml bytes=663874
16:33:50 ==> Appending adjustments
16:33:50     https://docs.codecov.io/docs/fixing-reports
16:33:52     + Found adjustments
16:33:52 ==> Gzipping contents
16:33:52         96K	/tmp/codecov.Y8MTnD.gz
16:33:52 ==> Uploading reports
16:33:52     url: https://codecov.io
16:33:52     query: branch=origin%2Fpr%2F3338%2Fmerge&commit=1015a577984df158e0d0fd8e9ac776fa810340c9&build=433&build_url=https%3A%2F%2Fgpuci.gpuopenanalytics.com%2Fjob%2Frapidsai%2Fjob%2Fgpuci%2Fjob%2Fcuml%2Fjob%2Fprb%2Fjob%2Fcuml-gpu-test%2FCUDA%3D11.0%2CGPU_LABEL%3Dgpu-a100%2COS%3Dubuntu18.04%2CPYTHON%3D3.8%2F433%2F&name=&tag=&slug=%2Fopt%2Fconda%2Fenvs%2Frapids%2Fconda-bld%2Fgit_cache%2Fjenkins%2Fworkspace%2Frapidsai%2Fgpuci%2Fcuml%2Fprb%2Fcuml-cpu-cuda-build%2FCUDA%2F11.0&service=jenkins&flags=ubuntu18.04,python3.8,cuda11.0,dask&pr=&job=&cmd_args=F,f,N
16:33:52 ->  Pinging Codecov
16:33:52 https://codecov.io/upload/v4?package=bash-20210115-cec3c92&token=secret&branch=origin%2Fpr%2F3338%2Fmerge&commit=1015a577984df158e0d0fd8e9ac776fa810340c9&build=433&build_url=https%3A%2F%2Fgpuci.gpuopenanalytics.com%2Fjob%2Frapidsai%2Fjob%2Fgpuci%2Fjob%2Fcuml%2Fjob%2Fprb%2Fjob%2Fcuml-gpu-test%2FCUDA%3D11.0%2CGPU_LABEL%3Dgpu-a100%2COS%3Dubuntu18.04%2CPYTHON%3D3.8%2F433%2F&name=&tag=&slug=%2Fopt%2Fconda%2Fenvs%2Frapids%2Fconda-bld%2Fgit_cache%2Fjenkins%2Fworkspace%2Frapidsai%2Fgpuci%2Fcuml%2Fprb%2Fcuml-cpu-cuda-build%2FCUDA%2F11.0&service=jenkins&flags=ubuntu18.04,python3.8,cuda11.0,dask&pr=&job=&cmd_args=F,f,N
16:33:53 ->  Uploading to
16:33:53 https://storage.googleapis.com/codecov/v4/raw/2021-01-21/90E8FDF45022B885DD3241BDCE6BA529/ec2ce30f1753bdb3eaa301c3051757a88cdcd6a1/2968281e-446f-491f-9423-e7af00b42379.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=GOOG1EQX6OZVJGHKK3633AAFGLBUCOOATRACRQRQF6HMSMLYUP6EAD6XSWAAY%2F20210121%2FUS%2Fs3%2Faws4_request&X-Amz-Date=20210121T233353Z&X-Amz-Expires=10&X-Amz-SignedHeaders=host&X-Amz-Signature=7f20ab9400425e6fe126a94010498abda2f211b8b1559d897ce0d3c20d1ff4de
16:33:53   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
16:33:53                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 94730    0     0  100 94730      0   656k --:--:-- --:--:-- --:--:--  660k
16:33:53     -> View reports at https://codecov.io/github/rapidsai/cuml/commit/ec2ce30f1753bdb3eaa301c3051757a88cdcd6a1

You can see our PRs that are all missing the base report here: Codecov. Let me know if you need any more info.

@tom any updates? Since I posted this message, we have been encountering another issue where all recent commits show “Unable to find commit in Github”.

We have never seen these messages before and it is making it difficult to work around my earlier “squash and merge” issue.



Hi @mdemoret-nv, I have a suspicion as to why this is happening, but I’d like to rule out a few things first. If you don’t mind, I would like to see whether a particular change in the bash uploader caused this. Would you be able to upload using https://raw.githubusercontent.com/codecov/codecov-bash/20201130-cc6d3fe/codecov in the curl command, as opposed to https://codecov.io/bash?

@tom We actually don’t use the bash uploader because it exits with a 500 error on some of our nodes and not others (You can see the error at the end of the build log here). Instead we have been using v2.1.11 of the python uploader which doesn’t run into the same 500 error.

I can try to use that version of the bash uploader if you think it would still be useful. If there is anything else I can try to rule out issues, let me know.

@mdemoret-nv, yes that would be helpful. I can’t track down the 500 as our logs don’t go that far back, but if you do get it again, let me know. Our bash uploader should be more sturdy and reliable.

@tom I was able to do some more digging today and discovered why the bash uploader wasn’t working for us while the Python client was. The root cause appears to be the --connect-timeout 2 argument to curl in the bash uploader. We were not aware of it, but our DNS lookup was taking roughly 10 seconds, causing the curl command to time out. The failure was hidden by the bash client due to the line:

"$url/upload/v2?$query&attempt=$i" || echo 'HTTP 500')

We have fixed the issue and reverted to using the bash uploader, and everything is working smoothly now (as far as uploading reports to Codecov goes). It might be a good idea to update the bash uploader to surface curl errors, in case others run into similar issues.
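To illustrate, here is a minimal sketch of that masking behavior, with `false` standing in as a hypothetical substitute for the failing curl call:

```shell
#!/usr/bin/env bash
# Sketch of how the uploader's fallback hides curl failures. `false`
# is a hypothetical stand-in for a curl call that fails (e.g. a DNS
# lookup that exceeds --connect-timeout 2).

# Pattern used by the uploader: the `|| echo` fallback means the
# command substitution always succeeds, so the real error is lost.
masked=$(false || echo 'HTTP 500')
echo "masked: $masked"   # the caller only ever sees "HTTP 500"

# One way to surface the failure instead: check the exit status
# before substituting a fallback value.
if ! result=$(false); then
  echo "upload failed; curl exit status preserved" >&2
  result='HTTP 500'
fi
echo "result: $result"
```

The first pattern reports “HTTP 500” no matter what actually went wrong, which is why the DNS timeout was invisible to us.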

Thanks for your help on this. We wouldn’t have looked into the issue further if you hadn’t suggested trying the bash uploader again. However, we still haven’t solved the original issue where the base report for PRs is missing after switching to a “squash and merge” strategy on GitHub.

Do you have any suggestions on fixing the “squash and merge” issue?

@mdemoret-nv, that’s an incredible catch! I’ll make a note to update the uploader to expose curl issues. Thanks for finding that, I really appreciate it.

Do you have a PR/commit SHA that I can take a look at that has the issue? I don’t have any specific tips, but I can poke the product team with something specific, or find a workaround that fits your needs.

@mdemoret-nv, for me, running tests and uploading coverage on the base branch after every merge solved the issue.

Every time a new commit enters the base branch, the CI pipeline is triggered to run on it.
The coverage is then uploaded for the base branch’s commit SHA, and subsequent PRs are compared against that.


This is the PR I have been working on: https://github.com/rapidsai/cuml/pull/3338. If you click on the Codecov report link in the comment, it only shows an error page now:

@tom Please let me know if there is anything else I can help with on my end. Would be great to hear what your product team has to say about this issue.

Thanks for the advice. We have been considering this approach, so I am glad to hear it’s a viable method. Our hesitation with going this route is that it would add load to our already overworked CI system (which currently takes >12 compute hours per commit/PR). We would like to avoid adding an additional CI run whose only purpose is to re-upload identical coverage with a different commit SHA.

However, this may be the route we have to use.


We have an (undocumented) option in the codecov.yml that you can try adding:

  allow_coverage_offsets: true

I think if you add a commit to that PR it should resolve. Let me know if it doesn’t.
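In a codecov.yml that might look something like this (a hedged sketch; the top-level codecov: parent key is an assumption here, so verify the placement against your config):

```yaml
# Sketch: placement of the undocumented option; the top-level
# `codecov:` parent key is assumed, not confirmed above.
codecov:
  allow_coverage_offsets: true
```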

Something we have been working on is a better squash->merge strategy for users who don’t run CI on the default branch. A possible workaround so you don’t have to run EVERYTHING would be to:

  1. Save down the coverage report sent to Codecov using the -q option.
  2. Save that file as an artifact (GitHub Actions has upload-artifact)
  3. Trigger CI on merge, but only to download the above artifact and upload with the -z option.
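A hedged sketch of those three steps as GitHub Actions fragments (the action versions and the -q/-z flag behavior are taken from the description above; verify them against your uploader version):

```yaml
# PR job: save the report Codecov would upload, then store it.
- run: bash <(curl -s https://codecov.io/bash) -q codecov-report.txt
- uses: actions/upload-artifact@v2
  with:
    name: codecov-report
    path: codecov-report.txt

# Job triggered on merge to the base branch: fetch the saved report
# and re-upload it against the new (squashed) commit SHA.
- uses: actions/download-artifact@v2
  with:
    name: codecov-report
- run: bash <(curl -s https://codecov.io/bash) -z codecov-report.txt
```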

@tom Thanks for the suggestion. I had noticed it mentioned once in the docs but couldn’t find any other reference to it.

Unfortunately, I tried that out and didn’t see any change. You can view the PR here, and the Codecov output is at the bottom of the log here.

Do you have any other suggestions as to why this isn’t working for us? The only change I have seen is that the Codecov.io reports now show “not found” for the base:


Before this change, a base commit was still shown, even if it was incorrect.

@tom Sorry to keep bombarding you with questions, but we keep running into new issues. Lately, some of our builds have failed to upload coverage with a 400 error and the message “Invalid request parameters”. Any ideas? Here is the log:

15:10:57   _____          _
15:10:57  / ____|        | |
15:10:57 | |     ___   __| | ___  ___ _____   __
15:10:57 | |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
15:10:57 | |___| (_) | (_| |  __/ (_| (_) \ V /
15:10:57  \_____\___/ \__,_|\___|\___\___/ \_/
15:10:57                               Bash-20210129-7c25fce
15:10:57 ==> git version 2.30.1 found
15:10:57 ==> curl 7.71.1 (x86_64-conda-linux-gnu) libcurl/7.71.1 OpenSSL/1.1.1j zlib/1.2.11 libssh2/1.9.0 nghttp2/1.43.0
15:10:57 Release-Date: 2020-07-01
15:10:57 Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp 
15:10:57 Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz NTLM NTLM_WB SPNEGO SSL TLS-SRP UnixSockets
15:10:57 ==> Jenkins CI detected.
15:10:57     project root: /var/lib/jenkins/workspace/rapidsai/gpuci/cuml/branches/cuml-gpu-build-branch-0.19/CUDA/10.1/ci/artifacts/cuml/cpu/conda_work
15:10:57 --> token set from env
15:10:57 --> slug set from env
15:10:57     Yaml found at: codecov.yml
15:10:57     -> Found 1 reports
15:10:57 ==> Detecting git/mercurial file structure
15:10:57 ==> Appending build variables
15:10:57     + OS
15:10:57     + PYTHON
15:10:57     + CUDA
15:10:57 ==> Reading reports
15:10:57     + /var/lib/jenkins/workspace/rapidsai/gpuci/cuml/branches/cuml-gpu-build-branch-0.19/CUDA/10.1/python/cuml/cuml-dask-coverage.xml bytes=663872
15:10:57 ==> Appending adjustments
15:10:57     https://docs.codecov.io/docs/fixing-reports
15:11:00     + Found adjustments
15:11:00 ==> Gzipping contents
15:11:00         96K	/tmp/codecov.rxHc2R.gz
15:11:00 ==> Uploading reports
15:11:00     url: https://codecov.io
15:11:00     query: branch=origin%2Fbranch-0.19&commit=6dfff668d7f88c254f78d7568d55b3938493d6aa&build=3&build_url=https%3A%2F%2Fgpuci.gpuopenanalytics.com%2Fjob%2Frapidsai%2Fjob%2Fgpuci%2Fjob%2Fcuml%2Fjob%2Fbranches%2Fjob%2Fcuml-gpu-build-branch-0.19%2FCUDA%3D10.1%2F3%2F&name=CUDA%3D10.1%2Cdask&tag=&slug=rapidsai%2Fcuml&service=jenkins&flags=dask&pr=&job=&cmd_args=F,f,n,c,e
15:11:00 ->  Pinging Codecov
15:11:00 https://codecov.io/upload/v4?package=bash-20210129-7c25fce&token=secret&branch=origin%2Fbranch-0.19&commit=6dfff668d7f88c254f78d7568d55b3938493d6aa&build=3&build_url=https%3A%2F%2Fgpuci.gpuopenanalytics.com%2Fjob%2Frapidsai%2Fjob%2Fgpuci%2Fjob%2Fcuml%2Fjob%2Fbranches%2Fjob%2Fcuml-gpu-build-branch-0.19%2FCUDA%3D10.1%2F3%2F&name=CUDA%3D10.1%2Cdask&tag=&slug=rapidsai%2Fcuml&service=jenkins&flags=dask&pr=&job=&cmd_args=F,f,n,c,e
15:11:00 Invalid request parameters
15:11:00 400

If I try to recreate the exact curl call to Codecov’s API, I do not get the same error.