Similar to #116, this will be useful for users with slow internet connections who restore old wallets.
Executor::run() in src/connection.cpp needs to be updated to retry a new server, which should fix this problem.
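For illustration, here is a minimal sketch of that retry-a-new-server idea. It is not SDL's actual code: the server URLs, the tryFetchBlocks() helper, and the loop itself are assumptions made up only to show the intended behavior.
```
#include <iostream>
#include <string>
#include <vector>

// Pretend block-fetch RPC; returns false to simulate a server dying mid-sync.
static bool tryFetchBlocks(const std::string& server) {
    std::cout << "fetching blocks from " << server << std::endl;
    return server != "https://test1.example:443"; // simulate one dead server
}

int main() {
    // Hypothetical list of configured backend servers.
    std::vector<std::string> servers = {
        "https://test1.example:443",  // assume this one goes down
        "https://test2.example:443"
    };

    bool synced = false;
    for (const std::string& server : servers) {
        if (tryFetchBlocks(server)) {
            synced = true; // sync finished on this server
            break;
        }
        // Instead of surfacing the error to the user and giving up,
        // fall through and retry the next configured server.
        std::cout << "server " << server << " failed, trying the next one" << std::endl;
    }
    std::cout << (synced ? "sync complete" : "all servers failed") << std::endl;
    return 0;
}
```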
When this code is changed, we need a good way to test it. I will describe one way below; maybe we can come up with something better.
When testing this code, begin rescanning an old wallet that needs to download lots of blocks. Look at STDOUT to see which server SDL is actually connected to, and bring that server down. If the new code works, SDL should switch to the other functional server. This will need coordination across devs if the person testing doesn't control the backend server that SDL is connecting to.
The above way of testing means bringing down a production server, which is not ideal. Another way would be to spin up two testing SDL backends, comment out all the production servers, and tell SDL internals about only the two testing servers. Then one of the testing servers can be brought down to see if the code switches correctly.
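As a rough picture of that setup, a hypothetical hard-coded server list might be edited like this for the test; the URLs, ports, and variable name are made up and are not SDL's real configuration.
```
#include <iostream>
#include <string>
#include <vector>

// Hypothetical hard-coded server list, edited for a failover test:
// production entries are commented out so SDL could only ever pick one of
// the two local test backends; bring one down and watch it switch.
static const std::vector<std::string> kServers = {
    // "https://production1.example:443",  // disabled for this test
    // "https://production2.example:443",  // disabled for this test
    "https://127.0.0.1:9067",              // test backend #1
    "https://127.0.0.1:9068"               // test backend #2
};

int main() {
    for (const std::string& server : kServers)
        std::cout << "candidate server: " << server << std::endl;
    return 0;
}
```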
Please refer to my comment here as well: https://git.hush.is/hush/SilentDragonLite/issues/116#issuecomment-4150. When restoring from a seed phrase it seems to try to sync blocks from all servers at the same time; my STDOUT goes a little bit crazy, and the fact that I was connected to all of my servers within the same minute makes me believe something is wrong.
@onryo I pushed some potential fixes just now. I will also do some testing
I can only confirm that it fixed the multiple connections and it feels like it did before; however, if a server becomes unavailable while loading blocks we still occasionally see the same error as before: `Unexpected compression flag: 60`.
@onryo if you can provide some STDOUT output from when you get "Unexpected compression flag", that might help debug. We should see it attempt to change servers.
Please have a look. @duke
```
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: RecvError', /home/user/.cargo/git/checkouts/silentdragonlite-cli-13034352649a6f08/0181b16/lib/src/grpcconnector.rs:122:44
stack backtrace:
Error fetching blocks Status { code: Internal, message: "Unexpected compression flag: 60" }
0: rust_begin_unwind
at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/panicking.rs:143:14
2: core::result::unwrap_failed
at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/result.rs:1785:5
3: <F as threadpool::FnBox>::call_box
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
@onryo OK, the above output means that the function ConnectionLoader::ShowProgress() in src/connection.cpp is causing the problem. It runs the "syncstatus" RPC, which comes back with an error, and that is what prints `Sync error Error with get_address_txids runtime Status { code: Internal, message: "Unexpected compression flag: 60" }`.
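To make the direction concrete, here is a simplified, hypothetical sketch of how that sync-status polling could react to the error by rotating servers instead of only printing it; SyncStatus, pollSyncStatus(), and switchToNextServer() are invented names, not SDL's actual functions.
```
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Invented stand-ins for SDL internals, for illustration only.
struct SyncStatus { long synced_blocks; long total_blocks; };

static const std::vector<std::string> g_servers = {
    "https://server-a.example:443", "https://server-b.example:443"
};
static size_t g_current = 0;

// Pretend "syncstatus" RPC: fails on the first server to mimic the
// "Unexpected compression flag: 60" style error from a dying backend.
static std::optional<SyncStatus> pollSyncStatus(const std::string& server) {
    if (server == g_servers[0]) return std::nullopt;
    return SyncStatus{141000, 141000};
}

static void switchToNextServer() {
    g_current = (g_current + 1) % g_servers.size();
    std::cout << "switching to " << g_servers[g_current] << std::endl;
}

int main() {
    for (int tick = 0; tick < 3; tick++) {
        std::optional<SyncStatus> status = pollSyncStatus(g_servers[g_current]);
        if (!status) {
            // Rather than only printing "Sync error ...", rotate to the next
            // configured server so the next poll can retry there.
            std::cout << "sync error on " << g_servers[g_current] << std::endl;
            switchToNextServer();
            continue;
        }
        std::cout << status->synced_blocks << "/" << status->total_blocks
                  << " blocks synced" << std::endl;
    }
    return 0;
}
```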
@onryo now that #120 is fixed, we should be able to continue debugging this. If/when you paste any STDOUT here, I don't need Rust backtraces, just the normal SDL STDOUT messages.
It was closed by mistake when making a release.