Compare commits


69 Commits

Author SHA1 Message Date
Aditya Kulkarni f913c64e6f Zecwallet Lightclient service changes 5 years ago
Marshall Gaucher 50a2667703 Merge pull request #91 from zcash-hackworks/immutable-GetDisplayPrevHash 5 years ago
Larry Ruane e4445ddace fix constant REORG (due to fixed GetDisplayPrevHash()) 5 years ago
Larry Ruane 886250e660 GetDisplayPrevHash() should not change its argument 5 years ago
Marshall Gaucher ac5aa8e42f Attempting to resolve issue in codecov (#90) 5 years ago
Marshall Gaucher 57b12e5841 Merge pull request #85 from zcash-hackworks/mdr0id-patch-dockerfile-note 5 years ago
Marshall Gaucher 6e2bb5b62b Merge pull request #87 from zcash-hackworks/add_codecov_ci 5 years ago
mdr0id 2d2007925a Resolve CI bug 1 5 years ago
mdr0id f2c657e877 Resolve typo 2 5 years ago
mdr0id f3023fc5b5 Resolve typo 5 years ago
mdr0id 6f909b1443 add codecov to ci test stage 5 years ago
Marshall Gaucher 12119fa13d Merge branch 'master' into mdr0id-patch-dockerfile-note 5 years ago
Marshall Gaucher e3a7c58f1d Merge pull request #86 from zcash-hackworks/mdr0id-patch-dockerfile-add-volumes 5 years ago
Marshall Gaucher 6b04ab1449 Update Dockerfile 5 years ago
Marshall Gaucher caddc14410 Update Dockerfile 5 years ago
Marshall Gaucher 559a04b145 Merge pull request #84 from zcash-hackworks/mdr0id-patch-remove-race-tester 5 years ago
Marshall Gaucher b4ade08c89 Update .gitlab-ci.yml 5 years ago
Marshall Gaucher 4b6b77336a Merge pull request #79 from zcash-hackworks/create_log_file 5 years ago
Marshall Gaucher e7e200ede3 Merge pull request #83 from zcash-hackworks/docker_patch 5 years ago
mdr0id 1e8cfac8a9 adding patch for dockerfile and according makefile targets 5 years ago
Marshall Gaucher a457821949 Merge pull request #81 from zcash-hackworks/ci_artifact_pass 5 years ago
mdr0id 13280b15e6 Resolve Alpine golang go test -race issue 5 years ago
mdr0id c47132e63b Fix typo in yaml 5 years ago
mdr0id 2f13056825 pass artifacts to test stage and fix race test 5 years ago
Marshall Gaucher 0cc64dd8f1 Merge pull request #73 from LarryRuane/add-tests 5 years ago
Larry Ruane 20763199c1 add bytestring tests 5 years ago
Larry Ruane da2231f423 add missing tests, empty (stubs) for now 5 years ago
Marshall Gaucher 6302175a00 Merge pull request #78 from LarryRuane/fix-length-encoding 5 years ago
Marshall Gaucher bc6e857e72 Merge pull request #80 from rex4539/fix-typos 5 years ago
Dimitris Apostolou e8d93c0687 Fix typos 5 years ago
mdr0id 86b915288c Add initial conditional to create log file if it does not exist 5 years ago
Larry Ruane 20d0a040e3 fix compact size length calculation 5 years ago
Marshall Gaucher 6f01d40f2e Merge pull request #76 from zcash-hackworks/add_contributing_doc 5 years ago
mdr0id d285e34775 add contributing guide 5 years ago
Marshall Gaucher 57128c12d7 update makefile targets and gitlab-ci.yml for unittest patch (#75) 5 years ago
Marshall Gaucher 094c2f08e4 Merge pull request #74 from zcash-hackworks/mdr0id-patch-ignore-binaries 5 years ago
Marshall Gaucher 9a1b929b1e Update .gitignore 5 years ago
Marshall Gaucher 5b675f9102 Merge pull request #71 from zcash-hackworks/mdr0id-patch-code-of-conduct 5 years ago
Marshall Gaucher 5b9b54aa50 Merge pull request #70 from zcash-hackworks/add-license-1 5 years ago
Marshall Gaucher 0e96c9d855 Create CODE_OF_CONDUCT.md 5 years ago
Marshall Gaucher 5f37c7ed68 Create LICENSE 5 years ago
Marshall Gaucher 6ac80494ab Merge pull request #61 from defuse/fix-very-insecure 5 years ago
Marshall Gaucher 7c0883ebfc Merge pull request #68 from zcash-hackworks/mdr0id-patch-2-ci-fork 5 years ago
Marshall Gaucher 1a24524691 update gitlab.yml 5 years ago
Marshall Gaucher 13245d99ea Merge pull request #65 from zcash-hackworks/mdr0id-patch-fork-ci-enable 5 years ago
Marshall Gaucher a9a1da015b Update gitlab-yaml to trigger on forks 5 years ago
Marshall Gaucher b246be3e45 Merge pull request #64 from zcash-hackworks/mdr0id-patch-ci-badge 5 years ago
Marshall Gaucher b141021ac0 add ci status 5 years ago
Taylor Hornby 5224340b92 Make -very-insecure imply 'don't use TLS.' 5 years ago
Marshall Gaucher c1279fa239 Merge pull request #53 from pacu/proto-swift-support 5 years ago
Marshall Gaucher 7e34619fb8 Merge pull request #58 from zcash-hackworks/mdr0id-patch-1 5 years ago
Marshall Gaucher ef2e78e850 Update README.md 5 years ago
Marshall Gaucher 731b3393d9 Merge pull request #57 from zcash-hackworks/update_gitlab_yaml 5 years ago
mdr0id b86dda72e5 update broken gitlab yaml for cgo targets on alpine 5 years ago
Marshall Gaucher c5b37391ed Merge pull request #43 from mdr0id/remove_0mq 5 years ago
Marshall Gaucher ce12fee640 Merge branch 'master' into remove_0mq 5 years ago
mdr0id ecf43dc353 reorg walk back 11 blocks instead of 10 5 years ago
Marshall Gaucher a91acdbdec Merge pull request #49 from mdr0id/add_gitlab_ci 5 years ago
mdr0id 4a2d22ca3a Update makefile for broken make target 5 years ago
mdr0id 28ed413092 Updating formatting 5 years ago
mdr0id 42eb73db32 Set height to 0 in error case for corrupted db 5 years ago
mdr0id 148a1da8c7 Make log app name more clear 5 years ago
Francisco Gindre 9f6cf742b6 add empty swift_prefix option value to .proto files 5 years ago
zebambam fc6c2f1342 Merge pull request #51 from zebambam/security_and_usability_fixes 5 years ago
zebambam f1ac9c1337 Added .gitignore, provided command line defaults, made errors more obvious, told users to check the log file after it is initialized when a fatal error occurs, require users to pass -very-insecure=true when not using TLS certificates 5 years ago
mdr0id ea3b3c119f creating initial Makefile for CI/CD 5 years ago
mdr0id f8a0274f7d creating initial CI/CD file 5 years ago
str4d f2aacce0ca Merge pull request #46 from mdr0id/remove_go_routine_db_write 5 years ago
mdr0id ed4591ecc4 Remove go routine that causes threading issues when writing to local db 5 years ago
1. .gitignore (7 changes)
2. .gitlab-ci.yml (122 changes)
3. CODE_OF_CONDUCT.md (53 changes)
4. CONTRIBUTING.md (207 changes)
5. Dockerfile (102 changes)
6. LICENSE (21 changes)
7. Makefile (103 changes)
8. README.md (41 changes)
9. cmd/ingest/ingest_test.go (8 changes)
10. cmd/ingest/main.go (28 changes)
11. cmd/server/main.go (84 changes)
12. cmd/server/server_test.go (8 changes)
13. frontend/frontend_test.go (8 changes)
14. frontend/rpc_client.go (8 changes)
15. frontend/rpc_test.go (39 changes)
16. parser/block.go (11 changes)
17. parser/block_header.go (8 changes)
18. parser/block_test.go (5 changes)
19. parser/internal/bytestring/bytestring.go (45 changes)
20. parser/internal/bytestring/bytestring_test.go (550 changes)
21. parser/transaction.go (2 changes)
22. storage/sqlite3_test.go (3 changes)
23. testdata/compact_blocks.json (6 changes)
24. walletrpc/compact_formats.proto (2 changes)
25. walletrpc/service.proto (34 changes)
26. walletrpc/walletrpc_test.go (8 changes)

.gitignore (7 changes)

@@ -0,0 +1,7 @@
*.conf
*.config
*.log
*.sqlite
*.pem
*.key
*.elf

.gitlab-ci.yml (122 changes)

@@ -0,0 +1,122 @@
# /************************************************************************
# File: .gitlab-ci.yml
# Author: mdr0id
# Date: 7/16/2019
# Description: Used to set up runners/jobs for lightwalletd
# Usage: Commit source and the pipeline will trigger the corresponding jobs.
#
# Known bugs/missing features:
#
# IMPORTANT NOTE: any job with a preceding '.' is ignored by the pipeline
# ************************************************************************/
image: golang:1.11-alpine

stages:
  - build
  - test
  - deploy
  - monitor

before_script:
  - apk update && apk add make git gcc musl-dev curl bash

# ************************************************************************/
# BUILD
# ************************************************************************/
.lint-check:
  stage: build
  script:
    - make lint

.build-docs:
  stage: build
  script:
    - make docs

build:build-linux:
  stage: build
  script:
    - make
  artifacts:
    paths:
      - ./server
      - ./ingest

.build-windows:
  stage: build
  script:
    - make

.build-mac:
  stage: build
  script:
    - make

# Build against latest Golang
.build-latest:
  stage: build
  image: golang:alpine
  script:
    - make
  allow_failure: true

# ************************************************************************/
# TEST
# ************************************************************************/
test:test-unittest:
  stage: test
  dependencies:
    - build:build-linux
  script:
    - make test
  after_script:
    - bash <(curl -s https://codecov.io/bash) -t $CODECOV_TOKEN

.test:test-race-conditions:
  stage: test
  dependencies:
    - build:build-linux
  script:
    - make race
  allow_failure: true

test:test-coverage:
  stage: test
  dependencies:
    - build:build-linux
  script:
    - make coverage
    - make coverage_report
    - make coverage_html
  after_script:
    - bash <(curl -s https://codecov.io/bash) -t $CODECOV_TOKEN
  artifacts:
    paths:
      - ./coverage.html

# ************************************************************************/
# DEPLOY
# ************************************************************************/
.release-candidate:
  stage: deploy
  script:
    - echo "Generating v0.0.1-rc"
  when: manual

.release-production:
  stage: deploy
  script:
    - echo "Generating v0.0.1"
  when: manual

# ************************************************************************/
# MONITOR
# ************************************************************************/
.monitor-release:
  stage: deploy
  script:
    - echo "Building docker image for v0.0.0"
    - make image
  when: manual

CODE_OF_CONDUCT.md (53 changes)

@@ -0,0 +1,53 @@
# Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute through reporting issues, posting feature requests, updating
documentation, submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
By adopting this Code of Conduct, project maintainers commit themselves to
fairly and consistently applying these principles to every aspect of managing
this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting a project maintainer (see below). All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. Maintainers are
obligated to maintain confidentiality with regard to the reporter of an
incident.
If you wish to contact specific maintainers directly, the following have made
themselves available for conduct issues:
- Marshall Gaucher (marshall@z.cash)
- Larry Ruane (larry@z.cash)
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 1.3.0, available at https://www.contributor-covenant.org/version/1/3/0/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

CONTRIBUTING.md (207 changes)

@@ -0,0 +1,207 @@
# Development Workflow
This document describes the standard workflows and terminology for developers at Zcash. It is intended to provide procedures that will allow users to contribute to the open-source code base. Below are common workflows users will encounter:
1. Fork lightwalletd Repository
2. Create Branch
3. Make & Commit Changes
4. Create Pull Request
5. Discuss / Review PR
6. Deploy / Merge PR
Before continuing, please ensure you have an existing GitHub or GitLab account. If not, visit [GitHub](https://github.com) or [GitLab](https://gitlab.com) to create an account.
## Fork Repository
This step assumes you are starting with a new GitHub/GitLab environment. If you have already forked the Lightwalletd repository, please continue to the Create Branch section. Otherwise, open a terminal and issue the commands below:
Note: Please replace `your_username` with your actual GitHub username
```bash
git clone git@github.com:your_username/lightwalletd.git
cd lightwalletd
git remote set-url origin git@github.com:your_username/lightwalletd.git
git remote add upstream git@github.com:zcash-hackworks/lightwalletd.git
git remote set-url --push upstream DISABLED
git fetch upstream
git branch -u upstream/master master
```
After issuing the above commands, your `.git/config` file should look similar to the following:
```bash
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = git@github.com:your_username/lightwalletd.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = upstream
merge = refs/heads/master
[remote "upstream"]
url = git@github.com:zcash-hackworks/lightwalletd.git
fetch = +refs/heads/*:refs/remotes/upstream/*
pushurl = DISABLED
```
This setup provides a single cloned environment to develop for Lightwalletd. There are alternative methods using multiple clones, but this document does not cover that process.
## Create Branch
While working on the Lightwalletd project, you are going to have bugs, features, and ideas to work on. Branching exists to aid these different tasks while you write code. Below are some conventions of branching at Zcash:
1. `master` branch is **ALWAYS** deployable
2. Branch names **MUST** be descriptive:
* General format: `issue#_short_description`
To create a new branch (assuming you are in `lightwalletd` directory):
```bash
git checkout -b [new_branch_name]
```
Note: Even though you have created a new branch, until you `git push` this local branch, it will not show up in your Lightwalletd fork on GitHub (e.g. https://github.com/your_username/lightwalletd)
To checkout an existing branch (assuming you are in `lightwalletd` directory):
```bash
git checkout [existing_branch_name]
```
If you are fixing a bug or implementing a new feature, you likely will want to create a new branch. If you are reviewing code or working on existing branches, you likely will checkout an existing branch. To view the list of current Lightwalletd GitHub issues, click [here](https://github.com/zcash-hackworks/lightwalletd/issues).
## Make & Commit Changes
If you have created a new branch or checked out an existing one, it is time to make changes to your local source code. Below are some formalities for commits:
1. Commit messages **MUST** be clear
2. Commit messages **MUST** be descriptive
3. Commit messages **MUST** be clean (see squashing commits for details)
While you continue development on a branch, keep in mind that other approved commits are being merged into `master`. To ensure there are minimal to no merge conflicts, you need to `rebase` onto master.
If you are new to this process, please sanity check your remotes:
```
git remote -v
```
```bash
origin git@github.com:your_username/lightwalletd.git (fetch)
origin git@github.com:your_username/lightwalletd.git (push)
upstream git@github.com:zcash-hackworks/lightwalletd.git (fetch)
upstream DISABLED (push)
```
This output should be consistent with your `.git/config`:
```bash
[branch "master"]
remote = upstream
merge = refs/heads/master
[remote "origin"]
url = git@github.com:your_username/lightwalletd.git
fetch = +refs/heads/*:refs/remotes/origin/*
[remote "upstream"]
url = git@github.com:zcash-hackworks/lightwalletd.git
fetch = +refs/heads/*:refs/remotes/upstream/*
pushurl = DISABLED
```
Once you have confirmed your branch/remote is valid, issue the following commands (assumes you have **NO** existing uncommitted changes):
```bash
git fetch upstream
git rebase upstream/master
git push -f
```
If you have uncommitted changes, use `git stash` to preserve them:
```bash
git stash
git fetch upstream
git rebase upstream/master
git push -f
git stash pop
```
Using `git stash` allows you to temporarily store your changes while you rebase onto `master`. Without it, git will refuse to rebase while you have uncommitted local changes.
Before committing changes, ensure your commit messages follow these guidelines:
1. Separate subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Wrap the body at 72 characters
6. Use the body to explain *what* and *why* vs. *how*
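For example, a commit message that follows these guidelines (the content below is hypothetical, loosely modeled on commits in this compare) might look like:

```
Fix reorg detection for rewound chains

The ingester previously assumed the chain only ever advances. Walk
back up to 11 blocks when the previous-hash check fails so that the
compact block database stays consistent after a reorg.
```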
Once synced with `master`, let's commit our changes:
```bash
git add [files...] # default is all files, be careful not to add unintended files
git commit -m 'Message describing commit'
git push
```
Now that your changed files have been committed, continue to the Create Pull Request section.
## Create Pull Request
On your GitHub page (e.g. https://github.com/your_username/lightwalletd), you will notice a newly created banner containing your recent commit with a big green `Compare & pull request`. Click on it.
First, write a brief summary comment for your PR -- this first comment should be no more than a few lines because it ends up in the merge commit message. This comment should mention the issue number preceded by a hash symbol (for example, #2984).
Add a second comment if more explanation is needed. It's important to explain why this pull request should be accepted. State whether the proposed change fixes part of the problem or all of it; if the change is temporary (a workaround) or permanent; if the problem also exists upstream (Bitcoin) and, if so, if and how it was fixed there.
If you click on `Commits`, you should see the diff of that commit; it's advisable to verify it's what you expect. You can also click on the small plus signs that appear when you hover over the lines on either the left or right side and add a comment specific to that part of the code. This is very helpful, as you don't have to tell the reviewers (in a general comment) that you're referring to a certain line in a certain file.
Add comments **before** adding reviewers, otherwise they will get a separate email for each comment you add. Once you're happy with the documentation you've added to your PR, select reviewers along the right side. For a trivial change (like the example here), one reviewer is enough, but generally you should have at least two reviewers, at least one of whom should be experienced. It may be good to add one less experienced engineer as a learning experience for that person.
## Discuss / Review PR
In order to merge your PR into `master`, you will need to convince the reviewers that your code does what you intend it to do.
**IMPORTANT:** If your PR introduces code that does not have existing tests to ensure it operates gracefully, you **MUST** also create these tests to accompany your PR.
Reviewers will investigate your PR and provide feedback. Generally, comments explicitly request code changes or ask for clarification of the implementation. Otherwise, reviewers will reply with PR terminology:
> **Concept ACK** - Agree with the idea and overall direction, but have neither reviewed nor tested the code changes.
> **utACK (untested ACK)** - Reviewed and agree with the code changes but haven't actually tested them.
> **Tested ACK** - Reviewed the code changes and have verified the functionality or bug fix.
> **ACK** - A loose ACK can be confusing. It's best to avoid them unless it's a documentation/comment only change in which case there is nothing to test/verify; therefore the tested/untested distinction is not there.
> **NACK** - Disagree with the code changes/concept. Should be accompanied by an explanation.
### Squashing Commits
Before your PR is accepted, you might be requested to squash your commits to clean up the logs. This can be done using the following approach:
```bash
git checkout branch_name
git rebase -i HEAD~4
```
The integer value after `~` is the number of commits you would like to interactively rebase. Pick a value that makes sense for your situation. A template will pop up in your terminal asking you to specify what to do with each prior commit:
```bash
Commands:
p, pick = use commit
r, reword = use commit, but edit the commit message
e, edit = use commit, but stop for amending
s, squash = use commit, but meld into previous commit
f, fixup = like "squash", but discard this commit's log message
x, exec = run command (the rest of the line) using shell
```
Modify each line with the desired command, followed by the hash of the commit. For example, to squash the last 4 commits into a single commit for this PR:
```bash
p 1fc6c95 Final commit message
s 6b2481b Third commit message
s dd1475d Second commit message
s c619268 First commit message
```
Once the interactive rebase completes, force-push the rewritten branch:
```bash
git push origin branch-name --force
```
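The same result can also be reproduced non-interactively with `git reset --soft`. The sketch below runs in a throwaway repository (all names and messages are hypothetical) and is an alternative to, not a replacement for, the interactive rebase shown above:

```shell
# Create a throwaway repo with four empty commits, then squash them into one.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
for msg in "First commit message" "Second commit message" \
           "Third commit message" "Final commit message"; do
  git commit -q --allow-empty -m "$msg"
done
# Move HEAD back three commits (keeping the index and working tree),
# then rewrite the remaining commit so only one squashed commit is left.
git reset --soft HEAD~3
git commit -q --amend --allow-empty -m "Final commit message"
git log --oneline
```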
## Deploy / Merge PR
Once your PR/MR has been properly reviewed, it will be run in the build pipeline to ensure it is valid to merge into master.
There will be times when your PR is waiting on some portion of the above process. If you are asked to rebase your PR so it merges gracefully into `master`, please do the following:
```bash
git checkout branch_name
git fetch upstream
git rebase upstream/master
git push -f
```

Dockerfile (102 changes)

@@ -0,0 +1,102 @@
# /************************************************************************
# File: Dockerfile
# Author: mdr0id
# Date: 9/3/2019
# Description: Used for devs that have not built zcash or lightwalletd on
# on existing system
# USAGE:
#
# To build image: make docker_img
# To run container: make docker_image_run
#
# This will place you into the container where you can run zcashd, zcash-cli,
# lightwalletd ingester, and lightwalletd server etc..
#
# First you need to get zcashd synced to the current height on testnet; from outside the container:
# make docker_img_run_zcashd
#
# Sometimes you need to manually start zcashd for the first time, from inside the container:
# zcashd -printtoconsole
#
# Once the block height is at least 280,000 you can go ahead and start the lightwalletd components:
# make docker_img_run_lightwalletd_ingest
# make docker_img_run_lightwalletd_insecure_server
#
# If you need a random bash session in the container, use:
# make docker_img_bash
#
# If you get kicked out of docker or it locks up...
# To restart, check to see what container you want to restart via docker ps -a
# Then, docker restart <container id>
# Then reattach to it: docker attach <container id>
#
# Known bugs/missing features/todos:
#
# *** DO NOT USE IN PRODUCTION ***
#
# - Create docker-compose with according .env scaffolding
# - Determine librustzcash bug that breaks zcashd alpine builds at runtime
# - Once versioning is stable add config flags for images
# - Add mainnet config once lightwalletd stack supports it
#
# ************************************************************************/
# Create layer in case you want to modify local lightwalletd code
FROM golang:1.11 AS lightwalletd_base
ENV ZCASH_CONF=/root/.zcash/zcash.conf
ENV LIGHTWALLETD_URL=https://github.com/zcash-hackworks/lightwalletd.git
RUN apt-get update && apt-get install -y make git gcc
WORKDIR /home
# Comment out line below to use local lightwalletd repo changes
RUN git clone ${LIGHTWALLETD_URL}
# To add local changes to container uncomment this line
#ADD . /home
RUN cd ./lightwalletd && make
RUN /usr/bin/install -c /home/lightwalletd/ingest /home/lightwalletd/server /usr/bin/
# Setup layer for zcashd and zcash-cli binary
FROM golang:1.11 AS zcash_builder
ENV ZCASH_URL=https://github.com/zcash/zcash.git
RUN apt-get update && apt-get install \
build-essential pkg-config libc6-dev m4 g++-multilib \
autoconf libtool ncurses-dev unzip git python python-zmq \
zlib1g-dev wget curl bsdmainutils automake python-pip -y
WORKDIR /build
RUN git clone ${ZCASH_URL}
RUN ./zcash/zcutil/build.sh -j$(nproc)
RUN bash ./zcash/zcutil/fetch-params.sh
RUN /usr/bin/install -c /build/zcash/src/zcashd /build/zcash/src/zcash-cli /usr/bin/
# Create layer for lightwalletd and zcash binaries to reduce image size
FROM golang:1.11 AS zcash_runner
ENV ZCASH_CONF=/root/.zcash/zcash.conf
RUN mkdir -p /root/.zcash/ && \
mkdir -p /root/.zcash-params/ && \
mkdir /logs/ && \
mkdir /db/
# Use lightwallet server and ingest binaries from prior layer
COPY --from=lightwalletd_base /usr/bin/ingest /usr/bin/server /usr/bin/
COPY --from=zcash_builder /usr/bin/zcashd /usr/bin/zcash-cli /usr/bin/
COPY --from=zcash_builder /root/.zcash-params/ /root/.zcash-params/
# Configure zcash.conf
RUN echo "testnet=1" >> ${ZCASH_CONF} && \
echo "addnode=testnet.z.cash" >> ${ZCASH_CONF} && \
echo "rpcbind=127.0.0.1" >> ${ZCASH_CONF} && \
echo "rpcport=18232" >> ${ZCASH_CONF} && \
echo "rpcuser=lwd" >> ${ZCASH_CONF} && \
echo "rpcpassword=`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`" >> ${ZCASH_CONF}
VOLUME ["/root/.zcash"]
VOLUME ["/root/.zcash-params"]
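The `rpcpassword` line above uses a common shell idiom to generate a random credential. Run on its own (assuming a POSIX shell with `/dev/urandom` available), it behaves like this:

```shell
# Print a random 13-character alphanumeric string, as used for rpcpassword.
head /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 13
echo
```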

LICENSE (21 changes)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019 Electric Coin Company
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Makefile (103 changes)

@@ -0,0 +1,103 @@
# /************************************************************************
# File: Makefile
# Author: mdr0id
# Date: 7/16/2019
# Description: Used for local and container dev in CI deployments
# Usage: make <target_name>
#
# Known bugs/missing features:
# 1. make msan is not stable as of 9/20/2019
#
# ************************************************************************/
PROJECT_NAME := "lightwalletd"
GO_FILES := $(shell find . -name '*.go' | grep -v /vendor/ | grep -v '_test.go')
GO_TEST_FILES := $(shell find . -name '*_test.go' -type f | rev | cut -d "/" -f2- | rev | sort -u)
GO_BUILD_FILES := $(shell find . -name 'main.go')
.PHONY: all dep build clean test coverage coverhtml lint
all: build
# Lint golang files
lint:
@golint -set_exit_status
show_tests:
@echo ${GO_TEST_FILES}
# Run unittests
test:
@go test -v -coverprofile=coverage.txt -covermode=atomic ./...
# Run data race detector
race:
GO111MODULE=on CGO_ENABLED=1 go test -v -race -short ./...
# Run memory sanitizer (need to ensure proper build flag is set)
msan:
@go test -v -msan -short ${GO_TEST_FILES}
# Generate global code coverage report
coverage:
@go test -coverprofile=coverage.out -covermode=atomic ./...
# Generate code coverage report
coverage_report:
@go tool cover -func=coverage.out
# Generate code coverage report in HTML
coverage_html:
@go tool cover -html=coverage.out -o coverage.html
# Generate documents
docs:
@echo "Generating docs..."
# Generate docker image
docker_img:
docker build -t zcash_lwd_base .
# Run the above docker image in a container
docker_img_run:
docker run -i --name zcashdlwd zcash_lwd_base
# Execute a bash process in the zcashdlwd container
docker_img_bash:
docker exec -it zcashdlwd bash
# Start the zcashd process in the zcashdlwd container
docker_img_run_zcashd:
docker exec -i zcashdlwd zcashd -printtoconsole
# Stop the zcashd process in the zcashdlwd container
docker_img_stop_zcashd:
docker exec -i zcashdlwd zcash-cli stop
# Start the lightwalletd ingester in the zcashdlwd container
docker_img_run_lightwalletd_ingest:
docker exec -i zcashdlwd ingest --conf-file /root/.zcash/zcash.conf --db-path /db/sql.db --log-file /logs/ingest.log
# Start the lightwalletd server in the zcashdlwd container
docker_img_run_lightwalletd_insecure_server:
docker exec -i zcashdlwd server --very-insecure=true --conf-file /root/.zcash/zcash.conf --db-path /db/sql.db --log-file /logs/server.log --bind-addr 127.0.0.1:18232
# Remove and delete ALL images and containers in Docker; assumes containers are stopped
docker_remove_all:
docker system prune -f
# Get dependencies
dep:
@go get -v -d ./...
# Build binary
build:
GO111MODULE=on CGO_ENABLED=1 go build -i -v ./cmd/ingest
GO111MODULE=on CGO_ENABLED=1 go build -i -v ./cmd/server
# Install binaries into Go path
install:
go install ./...
clean:
@echo "clean project..."
#rm -f $(PROJECT_NAME)

README.md (41 changes)

@@ -1,3 +1,6 @@
[![pipeline status](https://gitlab.com/mdr0id/lightwalletd/badges/master/pipeline.svg)](https://gitlab.com/mdr0id/lightwalletd/commits/master)
# Overview
[lightwalletd](https://github.com/zcash-hackworks/lightwalletd) is a backend service that provides a bandwidth-efficient interface to the Zcash blockchain. Currently, lightwalletd supports the Sapling protocol version as its primary concern. The intended purpose of lightwalletd is to support the development of mobile-friendly shielded light wallets.
@@ -35,13 +38,13 @@ A **compact block** is a collection of compact transactions along with certain metadata.
The ingester is the component responsible for transforming raw Zcash block data into a compact block.
- The ingester is a modular component. Anything that can retrieve the necessary data and put it into storage can fulfill this role. Currently, the only ingester available subscribes to a 0MQ feed from zcashd and parses that raw block data. This approach has turned out to be fairly brittle - for instance, zcashd provides no way to resend a block that's been missed without a full resync. It's clear that the 0MQ publisher isn't meant for production use, and we're looking into improvements. Future versions could retrieve information via the zcashd RPC or download pre-parsed blocks from a cloud store.
+ The ingester is a modular component. Anything that can retrieve the necessary data and put it into storage can fulfill this role. Currently, the only ingester available communicates with zcashd through RPCs and parses that raw block data.
**How do I run it?**
⚠️ This section describes how to execute the binaries directly from source code. This is suitable only for testing, not production deployment. See the Production section for cleaner instructions.
- ⚠️ Bringing up a fresh compact block database can take serveral hours of uninterrupted runtime.
+ ⚠️ Bringing up a fresh compact block database can take several hours of uninterrupted runtime.
First, install [Go >= 1.11](https://golang.org/dl/#stable). Older versions of Go may work but are not actively supported at this time. Note that the version of Go packaged by Debian stable (or anything prior to Buster) is far too old to work.
@@ -81,7 +84,9 @@ To see the other command line options, run `go run cmd/server/main.go --help`.
**What should I watch out for?**
Not much! This is a very simple piece of software. Make sure you point it at the same storage as the ingester. See the "Production" section for some caveats.
x509 Certificates! This software relies on the confidentiality and integrity of a modern TLS connection between incoming clients and the front-end. Without an x509 certificate that incoming clients can properly authenticate, the security properties of this software are lost.
Otherwise, not much! This is a very simple piece of software. Make sure you point it at the same storage as the ingester. See the "Production" section for some caveats.
Support for users sending transactions will require the ability to make JSON-RPC calls to a zcashd instance. By default the frontend tries to pull RPC credentials from your zcashd.conf file, but you can specify other credentials via command line flag. In the future, it should be possible to do this with environment variables [(#2)](https://github.com/zcash-hackworks/lightwalletd/issues/2).
@ -97,12 +102,36 @@ It's not necessary to explicitly run anything. Both the ingester and the fronten
**What should I watch out for?**
sqlite is extremely reliable for what it is, but it isn't good at high concurrency. Because sqlite uses a global write lock, the code limits the number of open database connections to *one* and currently makes no distinction betwen read-only (frontend) and read/write (ingester) connections. It will probably begin to exhibit lock contention at low user counts, and should be improved or replaced with your own data store in production.
sqlite is extremely reliable for what it is, but it isn't good at high concurrency. Because sqlite uses a global write lock, the code limits the number of open database connections to *one* and currently makes no distinction between read-only (frontend) and read/write (ingester) connections. It will probably begin to exhibit lock contention at low user counts, and should be improved or replaced with your own data store in production.
## Production
⚠️ This is informational documentation about a piece of alpha software. It has not yet undergone audits or been subject to rigorous testing. It lacks some affordances necessary for production-level reliability. We do not recommend using it to handle customer funds at this time (March 2019).
**x509 Certificates**
You will need to supply an x509 certificate that connecting clients will have good reason to trust (hint: do not use a self-signed one, our SDK will reject those unless you distribute them to the client out-of-band). We suggest that you be sure to buy a reputable one from a supplier that uses a modern hashing algorithm (NOT md5 or sha1) and that uses Certificate Transparency (OID 1.3.6.1.4.1.11129.2.4.2 will be present in the certificate).
To check a given certificate's (cert.pem) hashing algorithm:
```
openssl x509 -text -in cert.pem | grep "Signature Algorithm"
```
To check if a given certificate (cert.pem) contains a Certificate Transparency OID:
```
echo "1.3.6.1.4.1.11129.2.4.2 certTransparency Certificate Transparency" > oid.txt
openssl asn1parse -in cert.pem -oid ./oid.txt | grep 'Certificate Transparency'
```
To use Let's Encrypt to generate a free certificate for your frontend, one method is to:
1) Install certbot
2) Open port 80 to your host
3) Point a forward DNS name at that host (some.forward.dns.com)
4) Run
```
certbot certonly --standalone --preferred-challenges http -d some.forward.dns.com
```
5) Pass the resulting certificate and key to the frontend using the `-tls-cert` and `-tls-key` options.
**Dependencies**
The first-order dependencies of this code are:
@ -121,5 +150,5 @@ lightwalletd currently lacks several things that you'll want in production. Cave
- There are no monitoring / metrics endpoints yet. You're on your own to notice if it goes down or check on its performance.
- Logging coverage is patchy and inconsistent. However, what exists emits structured JSON compatible with various collectors.
- Logging may capture identifiable user data. It hasn't received any privacy analysis yet and makes no attempt at sanitization.
- The only storage provider we've implemented is sqlite. sqlite is [likely not appropriate](https://sqlite.org/whentouse.html) for the number of concurrent requests we expect to handle. Because sqlite uses a global write lock, the code limits the number of open database connections to *one* and currently makes no distinction betwen read-only (frontend) and read/write (ingester) connections. It will probably begin to exhibit lock contention at low user counts, and should be improved or replaced with your own data store in production.
- [Load-balancing with gRPC](https://grpc.io/blog/loadbalancing) may not work quite like you're used to. A full explanation is beyond the scope of this document, but we recommend looking into [Envoy](https://www.envoyproxy.io/), [nginx](https://nginx.com), or [haproxy](https://www.haproxy.org) depending on your existing infrastruture.
- The only storage provider we've implemented is sqlite. sqlite is [likely not appropriate](https://sqlite.org/whentouse.html) for the number of concurrent requests we expect to handle. Because sqlite uses a global write lock, the code limits the number of open database connections to *one* and currently makes no distinction between read-only (frontend) and read/write (ingester) connections. It will probably begin to exhibit lock contention at low user counts, and should be improved or replaced with your own data store in production.
- [Load-balancing with gRPC](https://grpc.io/blog/loadbalancing) may not work quite like you're used to. A full explanation is beyond the scope of this document, but we recommend looking into [Envoy](https://www.envoyproxy.io/), [nginx](https://nginx.com), or [haproxy](https://www.haproxy.org) depending on your existing infrastructure.

8
cmd/ingest/ingest_test.go

@ -0,0 +1,8 @@
package main
import (
"testing"
)
func TestString_read(t *testing.T) {
}

28
cmd/ingest/main.go

@ -72,7 +72,7 @@ func main() {
logger.SetLevel(logrus.Level(opts.logLevel))
log = logger.WithFields(logrus.Fields{
"app": "lightwd",
"app": "lightwalletd",
})
// Initialize database
@ -98,16 +98,7 @@ func main() {
if err != nil {
log.WithFields(logrus.Fields{
"error": err,
}).Warn("zcash.conf failed, will try empty credentials for rpc")
//Default to testnet, but user MUST specify rpcuser and rpcpassword in zcash.conf; no default
rpcClient, err = frontend.NewZRPCFromCreds("127.0.0.1:18232", "", "")
if err != nil {
log.WithFields(logrus.Fields{
"error": err,
}).Fatal("couldn't start rpc connection")
}
}).Fatal("setting up RPC connection to zcashd")
}
ctx := context.Background()
@ -115,16 +106,17 @@ func main() {
if err != nil {
log.WithFields(logrus.Fields{
"error": err,
}).Warn("unable to get current height from local db storage")
}).Warn("unable to get current height from local db storage")
height = 0
}
//ingest from Sapling testnet height
if height < 280000 {
height = 280000
log.WithFields(logrus.Fields{
"error": err,
}).Warn("invalid current height read from local db storage")
}).Warn("invalid current height read from local db storage")
}
timeout_count := 0
@ -135,7 +127,7 @@ func main() {
for {
if reorg_count > 0 {
reorg_count = -1
height -= 10
height -= 11
}
block, err := getBlock(rpcClient, height)
@ -157,8 +149,8 @@ func main() {
if timeout_count > 0 {
timeout_count--
}
phash = hex.EncodeToString(block.GetPrevHash())
//check for reorgs once we have inital block hash from startup
phash = hex.EncodeToString(block.GetDisplayPrevHash())
//check for reorgs once we have initial block hash from startup
if hash != phash && reorg_count != -1 {
reorg_count++
log.WithFields(logrus.Fields{
@ -207,7 +199,7 @@ func getBlock(rpcClient *rpcclient.Client, height int) (*parser.Block, error) {
if err != nil{
return nil, errors.Wrap(err, "error reading JSON response")
}
blockData, err := hex.DecodeString(blockDataHex)
if err != nil {
return nil, errors.Wrap(err, "error decoding getblock output")

84
cmd/server/main.go

@ -5,7 +5,8 @@ import (
"flag"
"net"
"os"
"os/signal"
"fmt"
"os/signal"
"syscall"
"time"
@ -29,9 +30,15 @@ func init() {
DisableLevelTruncation: true,
})
onexit := func () {
fmt.Printf("Lightwalletd died with a Fatal error. Check logfile for details.\n")
}
log = logger.WithFields(logrus.Fields{
"app": "frontend-grpc",
})
logrus.RegisterExitHandler(onexit)
}
// TODO stream logging
@ -75,31 +82,58 @@ func loggerFromContext(ctx context.Context) *logrus.Entry {
}
type Options struct {
bindAddr string `json:"bind_address,omitempty"`
dbPath string `json:"db_path"`
tlsCertPath string `json:"tls_cert_path,omitempty"`
tlsKeyPath string `json:"tls_cert_key,omitempty"`
logLevel uint64 `json:"log_level,omitempty"`
logPath string `json:"log_file,omitempty"`
zcashConfPath string `json:"zcash_conf,omitempty"`
bindAddr string `json:"bind_address,omitempty"`
dbPath string `json:"db_path"`
tlsCertPath string `json:"tls_cert_path,omitempty"`
tlsKeyPath string `json:"tls_cert_key,omitempty"`
logLevel uint64 `json:"log_level,omitempty"`
logPath string `json:"log_file,omitempty"`
zcashConfPath string `json:"zcash_conf,omitempty"`
veryInsecure bool `json:"very_insecure,omitempty"`
}
func fileExists(filename string) bool {
info, err := os.Stat(filename)
if os.IsNotExist(err) {
return false
}
return !info.IsDir()
}
func main() {
opts := &Options{}
flag.StringVar(&opts.bindAddr, "bind-addr", "127.0.0.1:9067", "the address to listen on")
flag.StringVar(&opts.dbPath, "db-path", "", "the path to a sqlite database file")
flag.StringVar(&opts.tlsCertPath, "tls-cert", "", "the path to a TLS certificate (optional)")
flag.StringVar(&opts.tlsKeyPath, "tls-key", "", "the path to a TLS key file (optional)")
flag.StringVar(&opts.dbPath, "db-path", "./database.sqlite", "the path to a sqlite database file")
flag.StringVar(&opts.tlsCertPath, "tls-cert", "./cert.pem", "the path to a TLS certificate")
flag.StringVar(&opts.tlsKeyPath, "tls-key", "./cert.key", "the path to a TLS key file")
flag.Uint64Var(&opts.logLevel, "log-level", uint64(logrus.InfoLevel), "log level (logrus 1-7)")
flag.StringVar(&opts.logPath, "log-file", "", "log file to write to")
flag.StringVar(&opts.zcashConfPath, "conf-file", "", "conf file to pull RPC creds from")
flag.StringVar(&opts.logPath, "log-file", "./server.log", "log file to write to")
flag.StringVar(&opts.zcashConfPath, "conf-file", "./zcash.conf", "conf file to pull RPC creds from")
flag.BoolVar(&opts.veryInsecure, "very-insecure", false, "run without the required TLS certificate, only for debugging, DO NOT use in production")
// TODO prod metrics
// TODO support config from file and env vars
flag.Parse()
if opts.dbPath == "" || opts.zcashConfPath == "" {
flag.Usage()
os.Exit(1)
filesThatShouldExist := []string {
opts.dbPath,
opts.tlsCertPath,
opts.tlsKeyPath,
opts.logPath,
opts.zcashConfPath,
}
for _, filename := range filesThatShouldExist {
if !fileExists(opts.logPath) {
os.OpenFile(opts.logPath, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0666)
}
if (opts.veryInsecure && (filename == opts.tlsCertPath || filename == opts.tlsKeyPath)) {
continue
}
if !fileExists(filename) {
os.Stderr.WriteString(fmt.Sprintf("\n ** File does not exist: %s\n\n", filename))
flag.Usage()
os.Exit(1)
}
}
if opts.logPath != "" {
@ -121,7 +155,9 @@ func main() {
// gRPC initialization
var server *grpc.Server
if opts.tlsCertPath != "" && opts.tlsKeyPath != "" {
if opts.veryInsecure {
server = grpc.NewServer(LoggingInterceptor())
} else {
transportCreds, err := credentials.NewServerTLSFromFile(opts.tlsCertPath, opts.tlsKeyPath)
if err != nil {
log.WithFields(logrus.Fields{
@ -131,9 +167,7 @@ func main() {
}).Fatal("couldn't load TLS credentials")
}
server = grpc.NewServer(grpc.Creds(transportCreds), LoggingInterceptor())
} else {
server = grpc.NewServer(LoggingInterceptor())
}
}
// Enable reflection for debugging
if opts.logLevel >= uint64(logrus.WarnLevel) {
@ -148,15 +182,7 @@ func main() {
if err != nil {
log.WithFields(logrus.Fields{
"error": err,
}).Warn("zcash.conf failed, will try empty credentials for rpc")
rpcClient, err = frontend.NewZRPCFromCreds("127.0.0.1:8232", "", "")
if err != nil {
log.WithFields(logrus.Fields{
"error": err,
}).Warn("couldn't start rpc conn. won't be able to send transactions")
}
}).Fatal("setting up RPC connection to zcashd")
}
// Compact transaction service initialization

8
cmd/server/server_test.go

@ -0,0 +1,8 @@
package main
import (
"testing"
)
func TestString_read(t *testing.T) {
}

8
frontend/frontend_test.go

@ -0,0 +1,8 @@
package frontend
import (
"testing"
)
func TestString_read(t *testing.T) {
}

8
frontend/rpc_client.go

@ -19,13 +19,9 @@ func NewZRPCFromConf(confPath string) (*rpcclient.Client, error) {
username := cfg.Section("").Key("rpcuser").String()
password := cfg.Section("").Key("rpcpassword").String()
return NewZRPCFromCreds(net.JoinHostPort(rpcaddr, rpcport), username, password)
}
func NewZRPCFromCreds(addr, username, password string) (*rpcclient.Client, error) {
// Connect to local zcash RPC server using HTTP POST mode.
// Connect to local Zcash RPC server using HTTP POST mode.
connCfg := &rpcclient.ConnConfig{
Host: addr,
Host: net.JoinHostPort(rpcaddr, rpcport),
User: username,
Pass: password,
HTTPPostMode: true, // Zcash only supports HTTP POST mode

39
frontend/rpc_test.go

@ -1,39 +0,0 @@
package frontend
import (
"encoding/json"
"strconv"
"strings"
"testing"
)
// a well-formed raw transaction
const coinbaseTxHex = "0400008085202f89010000000000000000000000000000000000000" +
"000000000000000000000000000ffffffff03580101ffffffff0200ca9a3b000000001976a9146b" +
"9ae8c14e917966b0afdf422d32dbac40486d3988ac80b2e60e0000000017a9146708e6670db0b95" +
"0dac68031025cc5b63213a4918700000000000000000000000000000000000000"
func TestSendTransaction(t *testing.T) {
client, err := NewZRPCFromCreds("127.0.0.1:8232", "user", "password")
if err != nil {
t.Fatalf("Couldn't init JSON-RPC client: %v", err)
}
params := make([]json.RawMessage, 1)
params[0] = json.RawMessage("\"" + coinbaseTxHex + "\"")
_, err = client.RawRequest("sendrawtransaction", params)
if err == nil {
t.Fatal("somehow succeeded at sending a coinbase tx")
}
errParts := strings.SplitN(err.Error(), ":", 2)
errCode, err := strconv.ParseInt(errParts[0], 10, 64)
if err != nil {
t.Errorf("couldn't parse error code: %v", err)
}
errMsg := strings.TrimSpace(errParts[1])
if errCode != -26 || errMsg != "16: coinbase" {
t.Error("got the wrong errors")
}
}

11
parser/block.go

@ -44,13 +44,14 @@ func (b *Block) GetEncodableHash() []byte {
}
func (b *Block) GetDisplayPrevHash() []byte {
h := b.hdr.HashPrevBlock
rhash := make([]byte, len(b.hdr.HashPrevBlock))
copy(rhash, b.hdr.HashPrevBlock)
// Reverse byte order
for i := 0; i < len(h)/2; i++ {
j := len(h) - 1 - i
h[i], h[j] = h[j], h[i]
for i := 0; i < len(rhash)/2; i++ {
j := len(rhash) - 1 - i
rhash[i], rhash[j] = rhash[j], rhash[i]
}
return h
return rhash
}
func (b *Block) HasSaplingTransactions() bool {

8
parser/block_header.go

@ -66,9 +66,9 @@ func CompactLengthPrefixedLen(val []byte) int {
length := len(val)
if length < 253 {
return 1 + length
} else if length < 0xffff {
} else if length <= 0xffff {
return 1 + 2 + length
} else if length < 0xffff {
} else if length <= 0xffffffff {
return 1 + 4 + length
} else {
return 1 + 8 + length
@ -80,11 +80,11 @@ func WriteCompactLengthPrefixed(buf *bytes.Buffer, val []byte) error {
if length < 253 {
binary.Write(buf, binary.LittleEndian, uint8(length))
binary.Write(buf, binary.LittleEndian, val)
} else if length < 0xffff {
} else if length <= 0xffff {
binary.Write(buf, binary.LittleEndian, byte(253))
binary.Write(buf, binary.LittleEndian, uint16(length))
binary.Write(buf, binary.LittleEndian, val)
} else if length < 0xffff {
} else if length <= 0xffffffff {
binary.Write(buf, binary.LittleEndian, byte(254))
binary.Write(buf, binary.LittleEndian, uint32(length))
binary.Write(buf, binary.LittleEndian, val)

5
parser/block_test.go

@ -42,6 +42,10 @@ func TestBlockParser(t *testing.T) {
t.Error("Read wrong version in a test block.")
break
}
if block.GetVersion() != 4 {
t.Error("Read wrong version in a test block.")
break
}
if block.GetTxCount() < 1 {
t.Error("No transactions in block")
@ -90,6 +94,7 @@ func TestCompactBlocks(t *testing.T) {
type compactTest struct {
BlockHeight int `json:"block"`
BlockHash string `json:"hash"`
PrevHash string `json:"prev"`
Full string `json:"full"`
Compact string `json:"compact"`
}

45
parser/internal/bytestring/bytestring.go

@ -3,7 +3,6 @@
package bytestring
import (
"errors"
"io"
)
@ -45,9 +44,7 @@ func (s *String) Read(p []byte) (n int, err error) {
}
n = copy(p, *s)
if !s.Skip(n) {
return 0, errors.New("unexpected end of bytestring read")
}
s.Skip(n)
return n, nil
}
@ -58,7 +55,11 @@ func (s *String) Empty() bool {
// Skip advances the string by n bytes and reports whether it was successful.
func (s *String) Skip(n int) bool {
return s.read(n) != nil
if len(*s) < n {
return false
}
(*s) = (*s)[n:]
return true
}
// ReadByte reads a single byte into out and advances over it. It reports if
@ -87,6 +88,7 @@ func (s *String) ReadBytes(out *[]byte, n int) bool {
// encoding used for length-prefixing and count values. If the values fall
// outside the expected canonical ranges, it returns false.
func (s *String) ReadCompactSize(size *int) bool {
*size = 0
lenBytes := s.read(1)
if lenBytes == nil {
return false
@ -106,8 +108,10 @@ func (s *String) ReadCompactSize(size *int) bool {
lenLen = 4
minSize = 0x10000
case lenByte == 255:
lenLen = 8
minSize = 0x100000000
// this case is not currently usable, beyond maxCompactSize;
// also, this is not possible if sizeof(int) is 4 bytes
// lenLen = 8; minSize = 0x100000000
return false
}
if lenLen > 0 {
@ -122,7 +126,6 @@ func (s *String) ReadCompactSize(size *int) bool {
if length > maxCompactSize || length < minSize {
return false
}
*size = int(length)
return true
}
@ -131,7 +134,7 @@ func (s *String) ReadCompactSize(size *int) bool {
// length field into out. It reports whether the read was successful.
func (s *String) ReadCompactLengthPrefixed(out *String) bool {
var length int
if ok := s.ReadCompactSize(&length); !ok {
if !s.ReadCompactSize(&length) {
return false
}
@ -148,7 +151,7 @@ func (s *String) ReadCompactLengthPrefixed(out *String) bool {
// signed, and advances over it. It reports whether the read was successful.
func (s *String) ReadInt32(out *int32) bool {
var tmp uint32
if ok := s.ReadUint32(&tmp); !ok {
if !s.ReadUint32(&tmp) {
return false
}
@ -160,7 +163,7 @@ func (s *String) ReadInt32(out *int32) bool {
// signed, and advances over it. It reports whether the read was successful.
func (s *String) ReadInt64(out *int64) bool {
var tmp uint64
if ok := s.ReadUint64(&tmp); !ok {
if !s.ReadUint64(&tmp) {
return false
}
@ -175,7 +178,11 @@ func (s *String) ReadUint16(out *uint16) bool {
if v == nil {
return false
}
*out = uint16(v[0]) | uint16(v[1])<<8
*out = 0
for i := 1; i >= 0; i-- {
*out <<= 8
*out |= uint16(v[i])
}
return true
}
@ -186,7 +193,11 @@ func (s *String) ReadUint32(out *uint32) bool {
if v == nil {
return false
}
*out = uint32(v[0]) | uint32(v[1])<<8 | uint32(v[2])<<16 | uint32(v[3])<<24
*out = 0
for i := 3; i >= 0; i-- {
*out <<= 8
*out |= uint32(v[i])
}
return true
}
@ -197,8 +208,11 @@ func (s *String) ReadUint64(out *uint64) bool {
if v == nil {
return false
}
*out = uint64(v[0]) | uint64(v[1])<<8 | uint64(v[2])<<16 | uint64(v[3])<<24 |
uint64(v[4])<<32 | uint64(v[5])<<40 | uint64(v[6])<<48 | uint64(v[7])<<56
*out = 0
for i := 7; i >= 0; i-- {
*out <<= 8
*out |= uint64(v[i])
}
return true
}
@ -213,6 +227,7 @@ func (s *String) ReadUint64(out *uint64) bool {
func (s *String) ReadScriptInt64(num *int64) bool {
// First byte is either an integer opcode, or the number of bytes in the
// number.
*num = 0
firstBytes := s.read(1)
if firstBytes == nil {
return false

550
parser/internal/bytestring/bytestring_test.go

@ -0,0 +1,550 @@
package bytestring
import (
"bytes"
"testing"
)
func TestString_read(t *testing.T) {
s := String{}
if !(s).Empty() {
t.Fatal("initial string not empty")
}
s = String{22, 33, 44}
if s.Empty() {
t.Fatal("string unexpectedly empty")
}
r := s.read(2)
if len(r) != 2 {
t.Fatal("unexpected string length after read()")
}
if !bytes.Equal(r, []byte{22, 33}) {
t.Fatal("miscompare after read()")
}
r = s.read(0)
if !bytes.Equal(r, []byte{}) {
t.Fatal("miscompare after read()")
}
if s.read(2) != nil {
t.Fatal("unexpected successful too-large read()")
}
r = s.read(1)
if !bytes.Equal(r, []byte{44}) {
t.Fatal("miscompare after read()")
}
r = s.read(0)
if !bytes.Equal(r, []byte{}) {
t.Fatal("miscompare after read()")
}
if s.read(1) != nil {
t.Fatal("unexpected successful too-large read()")
}
}
func TestString_Read(t *testing.T) {
s := String{22, 33, 44}
b := make([]byte, 10)
n, err := s.Read(b)
if err != nil {
t.Fatal("Read() failed")
}
if n != 3 {
t.Fatal("Read() returned incorrect length")
}
if !bytes.Equal(b[:3], []byte{22, 33, 44}) {
t.Fatal("miscompare after Read()")
}
// s should now be empty
n, err = s.Read(b)
if err == nil {
t.Fatal("Read() unexpectedly succeeded")
}
if n != 0 {
t.Fatal("Read() failed as expected but returned incorrect length")
}
// s is empty; a zero-length destination slice is not an error
n, err = s.Read([]byte{})
if err != nil {
t.Fatal("Read() failed")
}
if n != 0 {
t.Fatal("Read() returned non-zero length")
}
// make sure we can advance through string s (this time buffer smaller than s)
s = String{55, 66, 77}
b = make([]byte, 2)
n, err = s.Read(b)
if err != nil {
t.Fatal("Read() failed")
}
if n != 2 {
t.Fatal("Read() returned incorrect length")
}
if !bytes.Equal(b[:2], []byte{55, 66}) {
t.Fatal("miscompare after Read()")
}
// keep reading s, one byte remaining
n, err = s.Read(b)
if err != nil {
t.Fatal("Read() failed")
}
if n != 1 {
t.Fatal("Read() returned incorrect length")
}
if !bytes.Equal(b[:1], []byte{77}) {
t.Fatal("miscompare after Read()")
}
// If the buffer to read into is zero-length...
s = String{88}
n, err = s.Read([]byte{})
if err != nil {
t.Fatal("Read() into zero-length buffer failed")
}
if n != 0 {
t.Fatal("Read() failed as expected but returned incorrect length")
}
}
func TestString_Skip(t *testing.T) {
s := String{22, 33, 44}
b := make([]byte, 10)
if !s.Skip(1) {
t.Fatal("Skip() failed")
}
n, err := s.Read(b)
if err != nil {
t.Fatal("Read() failed")
}
if n != 2 {
t.Fatal("Read() returned incorrect length")
}
if !bytes.Equal(b[:2], []byte{33, 44}) {
t.Fatal("miscompare after Read()")
}
// we're at the end of the string
if s.Skip(1) {
t.Fatal("Skip() unexpectedly succeeded")
}
if !s.Skip(0) {
t.Fatal("Skip(0) failed")
}
}
func TestString_ReadByte(t *testing.T) {
s := String{22, 33}
var b byte
if !s.ReadByte(&b) {
t.Fatal("ReadByte() failed")
}
if b != 22 {
t.Fatal("ReadByte() unexpected value")
}
if !s.ReadByte(&b) {
t.Fatal("ReadByte() failed")
}
if b != 33 {
t.Fatal("ReadByte() unexpected value")
}
// we're at the end of the string
if s.ReadByte(&b) {
t.Fatal("ReadByte() unexpectedly succeeded")
}
}
func TestString_ReadBytes(t *testing.T) {
s := String{22, 33, 44}
var b []byte
if !s.ReadBytes(&b, 2) {
t.Fatal("ReadBytes() failed")
}
if !bytes.Equal(b, []byte{22, 33}) {
t.Fatal("miscompare after ReadBytes()")
}
// s is now [44]
if len(s) != 1 {
t.Fatal("unexpected updated s following ReadBytes()")
}
if s.ReadBytes(&b, 2) {
t.Fatal("ReadBytes() unexpected success")
}
if !s.ReadBytes(&b, 1) {
t.Fatal("ReadBytes() failed")
}
if !bytes.Equal(b, []byte{44}) {
t.Fatal("miscompare after ReadBytes()")
}
}
var readCompactSizeTests = []struct {
s String
ok bool
expected int
}{
/* 00 */ {String{}, false, 0},
/* 01 */ {String{43}, true, 43},
/* 02 */ {String{252}, true, 252},
/* 03 */ {String{253, 1, 0}, false, 0}, // 1 < minSize (253)
/* 04 */ {String{253, 252, 0}, false, 0}, // 252 < minSize (253)
/* 05 */ {String{253, 253, 0}, true, 253},
/* 06 */ {String{253, 255, 255}, true, 0xffff},
/* 07 */ {String{254, 0xff, 0xff, 0, 0}, false, 0}, // 0xffff < minSize
/* 08 */ {String{254, 0, 0, 1, 0}, true, 0x00010000},
/* 09 */ {String{254, 7, 0, 1, 0}, true, 0x00010007},
/* 10 */ {String{254, 0, 0, 0, 2}, true, 0x02000000},
/* 11 */ {String{254, 1, 0, 0, 2}, false, 0}, // > maxCompactSize
/* 12 */ {String{255, 0, 0, 0, 2, 0, 0, 0, 0}, false, 0},
}
func TestString_ReadCompactSize(t *testing.T) {
for i, tt := range readCompactSizeTests {
var expected int
ok := tt.s.ReadCompactSize(&expected)
if ok != tt.ok {
t.Fatalf("ReadCompactSize case %d: want: %v, have: %v", i, tt.ok, ok)
}
if expected != tt.expected {
t.Fatalf("ReadCompactSize case %d: want: %v, have: %v", i, tt.expected, expected)
}
}
}
func TestString_ReadCompactLengthPrefixed(t *testing.T) {
// a 3-byte length-prefixed field followed by a 2-byte one; each is read into v
s := String{3, 55, 66, 77, 2, 88, 99}
v := String{}
// read the length byte (3) and the following 3 bytes
if !s.ReadCompactLengthPrefixed(&v) {
t.Fatalf("ReadCompactLengthPrefix failed")
}
if len(v) != 3 {
t.Fatalf("ReadCompactLengthPrefix incorrect length")
}
if !bytes.Equal(v, String{55, 66, 77}) {
t.Fatalf("ReadCompactLengthPrefix unexpected return")
}
// read the length byte (2) and the following 2 bytes
if !s.ReadCompactLengthPrefixed(&v) {
t.Fatalf("ReadCompactLengthPrefix failed")
}
if len(v) != 2 {
t.Fatalf("ReadCompactLengthPrefix incorrect length")
}
if !bytes.Equal(v, String{88, 99}) {
t.Fatalf("ReadCompactLengthPrefix unexpected return")
}
// at the end of the String, another read should return false
if s.ReadCompactLengthPrefixed(&v) {
t.Fatalf("ReadCompactLengthPrefix unexpected success")
}
// this string is too short (only 2 of the 3 promised data bytes)
s = String{3, 55, 66}
if s.ReadCompactLengthPrefixed(&v) {
t.Fatalf("ReadCompactLengthPrefix unexpected success")
}
}
var readInt32Tests = []struct {
s String
expected int32
}{
// Little-endian (least-significant byte first)
/* 00 */ {String{0, 0, 0, 0}, 0},
/* 01 */ {String{17, 0, 0, 0}, 17},
/* 02 */ {String{0xde, 0x8a, 0x7b, 0x72}, 0x727b8ade},
/* 03 */ {String{0xde, 0x8a, 0x7b, 0x92}, -1837397282}, // signed overflow
/* 04 */ {String{0xff, 0xff, 0xff, 0xff}, -1},
}
var readInt32FailTests = []struct {
s String
}{
/* 00 */ {String{}},
/* 01 */ {String{1, 2, 3}}, // too few bytes (must be >= 4)
}
func TestString_ReadInt32(t *testing.T) {
// create one large string to ensure a sequence of values can be read
var s String
for _, tt := range readInt32Tests {
s = append(s, tt.s...)
}
for i, tt := range readInt32Tests {
var v int32
if !s.ReadInt32(&v) {
t.Fatalf("ReadInt32 case %d: failed", i)
}
if v != tt.expected {
t.Fatalf("ReadInt32 case %d: want: %v, have: %v", i, tt.expected, v)
}
}
if len(s) > 0 {
t.Fatalf("ReadInt32 bytes remaining: %d", len(s))
}
for i, tt := range readInt32FailTests {
var v int32
prevlen := len(tt.s)
if tt.s.ReadInt32(&v) {
t.Fatalf("ReadInt32 fail case %d: unexpected success", i)
}
if v != 0 {
t.Fatalf("ReadInt32 fail case %d: value should be zero", i)
}
if len(tt.s) != prevlen {
t.Fatalf("ReadInt32 fail case %d: some bytes consumed", i)
}
}
}
var readInt64Tests = []struct {
s String
expected int64
}{
// Little-endian (least-significant byte first)
/* 00 */ {String{0, 0, 0, 0, 0, 0, 0, 0}, 0},
/* 01 */ {String{17, 0, 0, 0, 0, 0, 0, 0}, 17},
/* 02 */ {String{0xde, 0x8a, 0x7b, 0x72, 0x27, 0xa3, 0x94, 0x55}, 0x5594a327727b8ade},
/* 03 */ {String{0xde, 0x8a, 0x7b, 0x72, 0x27, 0xa3, 0x94, 0x85}, -8821246380292207906}, // signed overflow
/* 04 */ {String{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}, -1},
}
var readInt64FailTests = []struct {
s String
}{
/* 00 */ {String{}},
/* 01 */ {String{1, 2, 3, 4, 5, 6, 7}}, // too few bytes (must be >= 8)
}
func TestString_ReadInt64(t *testing.T) {
// create one large string to ensure a sequence of values can be read
var s String
for _, tt := range readInt64Tests {
s = append(s, tt.s...)
}
for i, tt := range readInt64Tests {
var v int64
if !s.ReadInt64(&v) {
t.Fatalf("ReadInt64 case %d: failed", i)
}
if v != tt.expected {
t.Fatalf("ReadInt64 case %d: want: %v, have: %v", i, tt.expected, v)
}
}
if len(s) > 0 {
t.Fatalf("ReadInt64 bytes remaining: %d", len(s))
}
for i, tt := range readInt64FailTests {
var v int64
prevlen := len(tt.s)
if tt.s.ReadInt64(&v) {
t.Fatalf("ReadInt64 fail case %d: unexpected success", i)
}
if v != 0 {
t.Fatalf("ReadInt64 fail case %d: value should be zero", i)
}
if len(tt.s) != prevlen {
t.Fatalf("ReadInt64 fail case %d: some bytes consumed", i)
}
}
}
var readUint16Tests = []struct {
s String
expected uint16
}{
// Little-endian (least-significant byte first)
/* 00 */ {String{0, 0}, 0},
/* 01 */ {String{23, 0}, 23},
/* 02 */ {String{0xde, 0x8a}, 0x8ade},
/* 03 */ {String{0xff, 0xff}, 0xffff},
}
var readUint16FailTests = []struct {
s String
}{
/* 00 */ {String{}},
/* 01 */ {String{1}}, // too few bytes (must be >= 2)
}
func TestString_ReadUint16(t *testing.T) {
// create one large string to ensure a sequence of values can be read
var s String
for _, tt := range readUint16Tests {
s = append(s, tt.s...)
}
for i, tt := range readUint16Tests {
var v uint16
if !s.ReadUint16(&v) {
t.Fatalf("ReadUint16 case %d: failed", i)
}
if v != tt.expected {
t.Fatalf("ReadUint16 case %d: want: %v, have: %v", i, tt.expected, v)
}
}
if len(s) > 0 {
t.Fatalf("ReadUint16 bytes remaining: %d", len(s))
}
for i, tt := range readUint16FailTests {
var v uint16
prevlen := len(tt.s)
if tt.s.ReadUint16(&v) {
t.Fatalf("ReadUint16 fail case %d: unexpected success", i)
}
if v != 0 {
t.Fatalf("ReadUint16 fail case %d: value should be zero", i)
}
if len(tt.s) != prevlen {
t.Fatalf("ReadUint16 fail case %d: some bytes consumed", i)
}
}
}
var readUint32Tests = []struct {
s String
expected uint32
}{
// Little-endian (least-significant byte first)
/* 00 */ {String{0, 0, 0, 0}, 0},
/* 01 */ {String{23, 0, 0, 0}, 23},
/* 02 */ {String{0xde, 0x8a, 0x7b, 0x92}, 0x927b8ade},
/* 03 */ {String{0xff, 0xff, 0xff, 0xff}, 0xffffffff},
}
var readUint32FailTests = []struct {
s String
}{
/* 00 */ {String{}},
/* 01 */ {String{1, 2, 3}}, // too few bytes (must be >= 4)
}
func TestString_ReadUint32(t *testing.T) {
// create one large string to ensure a sequence of values can be read
var s String
for _, tt := range readUint32Tests {
s = append(s, tt.s...)
}
for i, tt := range readUint32Tests {
var v uint32
if !s.ReadUint32(&v) {
t.Fatalf("ReadUint32 case %d: failed", i)
}
if v != tt.expected {
t.Fatalf("ReadUint32 case %d: want: %v, have: %v", i, tt.expected, v)
}
}
if len(s) > 0 {
t.Fatalf("ReadUint32 bytes remaining: %d", len(s))
}
for i, tt := range readUint32FailTests {
var v uint32
prevlen := len(tt.s)
if tt.s.ReadUint32(&v) {
t.Fatalf("ReadUint32 fail case %d: unexpected success", i)
}
if v != 0 {
t.Fatalf("ReadUint32 fail case %d: value should be zero", i)
}
if len(tt.s) != prevlen {
t.Fatalf("ReadUint32 fail case %d: some bytes consumed", i)
}
}
}
var readUint64Tests = []struct {
s String
expected uint64
}{
// Little-endian (least-significant byte first)
/* 00 */ {String{0, 0, 0, 0, 0, 0, 0, 0}, 0},
/* 01 */ {String{17, 0, 0, 0, 0, 0, 0, 0}, 17},
/* 02 */ {String{0xde, 0x8a, 0x7b, 0x72, 0x27, 0xa3, 0x94, 0x55}, 0x5594a327727b8ade},
/* 03 */ {String{0xde, 0x8a, 0x7b, 0x72, 0x27, 0xa3, 0x94, 0x85}, 0x8594a327727b8ade},
/* 04 */ {String{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}, 0xffffffffffffffff},
}
var readUint64FailTests = []struct {
s String
}{
/* 00 */ {String{}},
/* 01 */ {String{1, 2, 3, 4, 5, 6, 7}}, // too few bytes (must be >= 8)
}
func TestString_ReadUint64(t *testing.T) {
// create one large string to ensure a sequence of values can be read
var s String
for _, tt := range readUint64Tests {
s = append(s, tt.s...)
}
for i, tt := range readUint64Tests {
var v uint64
if !s.ReadUint64(&v) {
t.Fatalf("ReadUint64 case %d: failed", i)
}
if v != tt.expected {
t.Fatalf("ReadUint64 case %d: want: %v, have: %v", i, tt.expected, v)
}
}
if len(s) > 0 {
t.Fatalf("ReadUint64 bytes remaining: %d", len(s))
}
for i, tt := range readUint64FailTests {
var v uint64
prevlen := len(tt.s)
if tt.s.ReadUint64(&v) {
t.Fatalf("ReadUint64 fail case %d: unexpected success", i)
}
if v != 0 {
t.Fatalf("ReadUint64 fail case %d: value should be zero", i)
}
if len(tt.s) != prevlen {
t.Fatalf("ReadUint64 fail case %d: some bytes consumed", i)
}
}
}
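The ReadUint32/ReadUint64 tests above assume a destructive reader: each call consumes bytes from the front of the String and, on short input, fails without consuming anything and zeroes the output. A minimal sketch of ReadUint64 consistent with that contract (the String type and method shape here are inferred from the tests, not the repository's exact code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// String is assumed to be a byte slice that is consumed from the front.
type String []byte

// ReadUint64 reads 8 little-endian bytes into *v and advances s.
// On short input it sets *v to zero, consumes nothing, and returns false,
// matching the fail-case assertions in the tests above.
func (s *String) ReadUint64(v *uint64) bool {
	if len(*s) < 8 {
		*v = 0
		return false
	}
	*v = binary.LittleEndian.Uint64((*s)[:8])
	*s = (*s)[8:]
	return true
}

func main() {
	s := String{0xde, 0x8a, 0x7b, 0x72, 0x27, 0xa3, 0x94, 0x55}
	var v uint64
	ok := s.ReadUint64(&v)
	fmt.Println(ok, v == 0x5594a327727b8ade, len(s)) // true true 0
}
```

ReadUint32 follows the same pattern with a 4-byte read.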
var readScriptInt64Tests = []struct {
s String
ok bool
expected int64
}{
// Little-endian (least-significant byte first).
/* 00 */ {String{}, false, 0},
/* 01 */ {String{0x4f}, true, -1},
/* 02 */ {String{0x00}, true, 0x00},
/* 03 */ {String{0x51}, true, 0x01},
/* 04 */ {String{0x52}, true, 0x02},
/* 05 */ {String{0x5f}, true, 0x0f},
/* 06 */ {String{0x60}, true, 0x10},
/* 07 */ {String{0x01}, false, 0}, // should be one byte following count 0x01
/* 08 */ {String{0x01, 0xbd}, true, 0xbd},
/* 09 */ {String{0x02, 0xbd, 0xac}, true, 0xacbd},
/* 10 */ {String{0x08, 0xbd, 0xac, 0x12, 0x34, 0x56, 0x78, 0x9a, 0x44}, true, 0x449a78563412acbd},
/* 11 */ {String{0x08, 0xbd, 0xac, 0x12, 0x34, 0x56, 0x78, 0x9a, 0x94}, true, -7738740698046616387},
}
func TestString_ReadScriptInt64(t *testing.T) {
for i, tt := range readScriptInt64Tests {
var v int64
ok := tt.s.ReadScriptInt64(&v)
if ok != tt.ok {
t.Fatalf("ReadScriptInt64 case %d: want: %v, have: %v", i, tt.ok, ok)
}
if v != tt.expected {
t.Fatalf("ReadScriptInt64 case %d: want: %v, have: %v", i, tt.expected, v)
}
// there should be no bytes remaining
if ok && len(tt.s) != 0 {
t.Fatalf("ReadScriptInt64 case %d: stream mispositioned", i)
}
}
}
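The table above pins down the script-number encoding: 0x00 (OP_0) decodes to 0, 0x4f (OP_1NEGATE) to -1, 0x51..0x60 (OP_1..OP_16) to 1..16, and a leading count byte 0x01..0x08 pushes that many little-endian bytes, with the sign bit only taking effect on a full 8-byte push (compare the last two cases). A hedged sketch consistent with those cases, not necessarily the repository's implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// String is assumed to be a byte slice that is consumed from the front.
type String []byte

// ReadScriptInt64 decodes the leading script-number encoding used in
// Bitcoin/Zcash scripts, consuming it from s. It leaves *v and s
// untouched and returns false on empty or truncated input.
func (s *String) ReadScriptInt64(v *int64) bool {
	if len(*s) < 1 {
		return false
	}
	op := (*s)[0]
	switch {
	case op == 0x00: // OP_0
		*v = 0
	case op == 0x4f: // OP_1NEGATE
		*v = -1
	case op >= 0x51 && op <= 0x60: // OP_1 .. OP_16
		*v = int64(op - 0x50)
	case op >= 0x01 && op <= 0x08: // push `op` little-endian bytes
		n := int(op)
		if len(*s) < 1+n {
			return false
		}
		var buf [8]byte // zero-padded, so short pushes decode as non-negative
		copy(buf[:], (*s)[1:1+n])
		*v = int64(binary.LittleEndian.Uint64(buf[:]))
		*s = (*s)[1+n:]
		return true
	default:
		return false
	}
	*s = (*s)[1:]
	return true
}

func main() {
	s := String{0x08, 0xbd, 0xac, 0x12, 0x34, 0x56, 0x78, 0x9a, 0x94}
	var v int64
	ok := s.ReadScriptInt64(&v)
	fmt.Println(ok, v) // true -7738740698046616387
}
```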

2
parser/transaction.go

@@ -64,7 +64,7 @@ func (tx *txIn) ParseFromSlice(data []byte) ([]byte, error) {
// Txout format as described in https://en.bitcoin.it/wiki/Transaction
type txOut struct {
-	// Non-negative int giving the number of Satoshis to be transferred
+	// Non-negative int giving the number of zatoshis to be transferred
Value uint64
// Script. CompactSize-prefixed.

3
storage/sqlite3_test.go

@@ -21,6 +21,7 @@ import (
type compactTest struct {
BlockHeight int `json:"block"`
BlockHash string `json:"hash"`
PrevHash string `json:"prev"`
Full string `json:"full"`
Compact string `json:"compact"`
}
@@ -66,7 +67,7 @@ func TestSqliteStorage(t *testing.T) {
protoBlock := block.ToCompact()
marshaled, _ := proto.Marshal(protoBlock)
-	err = StoreBlock(db, height, hash, hasSapling, marshaled)
+	err = StoreBlock(db, height, test.PrevHash, hash, hasSapling, marshaled)
if err != nil {
t.Error(err)
continue

6
testdata/compact_blocks.json

File diff suppressed because one or more lines are too long

2
walletrpc/compact_formats.proto

@@ -1,7 +1,7 @@
syntax = "proto3";
package cash.z.wallet.sdk.rpc;
option go_package = "walletrpc";
option swift_prefix = "";
// Remember that proto3 fields are all optional. A field that is not present will be set to its zero value.
// bytes fields of hashes are in canonical little-endian format.
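Because hash bytes are canonical little-endian on the wire, they must be byte-reversed before hex display to match the big-endian convention block explorers use; the GetDisplayPrevHash fixes in this compare concern doing that reversal without mutating the argument. A small illustrative helper (hypothetical name, not the repository's function):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// displayHash hex-encodes a wire-order (little-endian) hash in display
// (big-endian) order. It copies into a fresh slice, so the caller's
// bytes are never mutated.
func displayHash(h []byte) string {
	rev := make([]byte, len(h))
	for i, b := range h {
		rev[len(h)-1-i] = b
	}
	return hex.EncodeToString(rev)
}

func main() {
	wire := []byte{0xef, 0xbe, 0xad, 0xde}
	fmt.Println(displayHash(wire)) // deadbeef
}
```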

34
walletrpc/service.proto

@@ -27,9 +27,11 @@ message TxFilter {
bytes hash = 3;
}
-// RawTransaction contains the complete transaction data.
+// RawTransaction contains the complete transaction data. It also optionally includes
+// the block height in which the transaction was included
message RawTransaction {
bytes data = 1;
uint64 height = 2;
}
message SendResponse {
@@ -40,10 +42,40 @@ message SendResponse {
// Empty placeholder. Someday we may want to specify e.g. a particular chain fork.
message ChainSpec {}
message Empty {}
message LightdInfo {
string version = 1;
string vendor = 2;
bool taddrSupport = 3;
string chainName = 4;
uint64 saplingActivationHeight = 5;
string consensusBranchId = 6; // This should really be u32 or []byte, but string for readability
uint64 blockHeight = 7;
}
message TransparentAddress {
string address = 1;
}
message TransparentAddressBlockFilter {
string address = 1;
BlockRange range = 2;
}
service CompactTxStreamer {
// Compact Blocks
rpc GetLatestBlock(ChainSpec) returns (BlockID) {}
rpc GetBlock(BlockID) returns (CompactBlock) {}
rpc GetBlockRange(BlockRange) returns (stream CompactBlock) {}
// Transactions
rpc GetTransaction(TxFilter) returns (RawTransaction) {}
rpc SendTransaction(RawTransaction) returns (SendResponse) {}
// t-Address support
rpc GetAddressTxids(TransparentAddressBlockFilter) returns (stream RawTransaction) {}
// Misc
rpc GetLightdInfo(Empty) returns (LightdInfo) {}
}

8
walletrpc/walletrpc_test.go

@@ -0,0 +1,8 @@
package walletrpc
import (
"testing"
)
func TestString_read(t *testing.T) {
}