if err ≠ nil

a blog about my projects, Kubernetes and Go by Alexej Kubarev

Handling XML-RPC communication in Go

•••

While working on one of my smaller projects I came across a need to make XML-RPC requests. After checking my calendar and ensuring that I had not gone back to ‘98, I set off on my search.

I was not very surprised to find hardly any information on, or projects supporting, XML-RPC in Go - especially given how ancient XML-RPC actually is.

My search did reveal a few options, but they were no longer maintained and had somewhat quirky APIs. This got me thinking: XML-RPC is, in a nutshell, very simple, and it should be fairly easy to make a library that supports my needs (and it’s a fun project).

And so I set off to create yet another ~~http router~~ XML-RPC client library for Go.

Design

For this client library, I had a few requirements for how I wanted it to work:

  1. It should provide a Client with a simple interface to make RPC calls.
  2. It should support handling all data structures that XML-RPC spec supports.
  3. Responses should be decodable into native Go data types.
  4. It should be possible to use pointers in both method arguments and method responses.

The result of my tinkering is the alexejk.io/go-xmlrpc library (GitHub).

Usage

To use this library, one must naturally add it to the project:

go get -u alexejk.io/go-xmlrpc

Then we can initialize the client with NewClient(endpoint string) (*Client, error) and make RPC calls with Call(method string, args interface{}, reply interface{}) error, as shown in the example below.

package main

import (
    "fmt"

    "alexejk.io/go-xmlrpc"
)

func main() {
    // Error handling is omitted here for brevity.
    client, _ := xmlrpc.NewClient("https://bugzilla.mozilla.org/xmlrpc.cgi")

    result := &struct {
        Bugzilla struct {
            Version string
        }
    }{}

    _ = client.Call("Bugzilla.version", nil, result)
    fmt.Printf("Version: %s\n", result.Bugzilla.Version)
}

The small program above makes an XML-RPC call to Mozilla’s Bugzilla API, requesting the RPC method Bugzilla.version. As this method accepts no arguments, the second parameter to Call is nil.

The server response is decoded into the result struct, not unlike how xml.Unmarshal works. If there is any mismatch between the response XML and the provided data type, the library will return an appropriate error, such as type 'SomeType' cannot be assigned a value of type 'string'.
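
For reference, the wire format is plain XML. The exchange behind this call looks roughly like the following (the version value is illustrative):

<?xml version="1.0"?>
<methodCall>
  <methodName>Bugzilla.version</methodName>
  <params/>
</methodCall>

The server then responds with a single struct param; its version member is what ends up in result.Bugzilla.Version:

<?xml version="1.0"?>
<methodResponse>
  <params>
    <param>
      <value><struct>
        <member>
          <name>version</name>
          <value><string>20200101.1</string></value>
        </member>
      </struct></value>
    </param>
  </params>
</methodResponse>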

Behind the scenes

I’ve decided to base the Client API on rpc.Client from net/rpc. This ensured a nice and familiar API.

I’ve taken inspiration from net/rpc/jsonrpc as well as the encoding/json and encoding/xml packages when writing the encoder and decoder for the wire format. This means quite a bit of reflection is used behind the scenes. However, given the nature of this package and what it makes possible, I do not see that as a drawback.
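
To illustrate the idea (this is not the library’s actual code, just a minimal sketch of the general reflection technique):

package main

import (
    "fmt"
    "reflect"
    "strings"
)

// setStringField assigns a decoded string value to a struct field looked up
// by capitalized member name - the core move of a reflection-based decoder.
func setStringField(target interface{}, member, value string) error {
    f := reflect.ValueOf(target).Elem().FieldByName(strings.Title(member))
    if !f.IsValid() || !f.CanSet() {
        return fmt.Errorf("no assignable field for member %q", member)
    }
    if f.Kind() != reflect.String {
        return fmt.Errorf("type '%s' cannot be assigned a value of type 'string'", f.Type())
    }
    f.SetString(value)
    return nil
}

func main() {
    out := struct{ Version string }{}
    _ = setStringField(&out, "version", "1.0")
    fmt.Println(out.Version) // 1.0
}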

What’s next

Overall, I’ve had a lot of fun writing this library, and it helped me get to know the reflect package much better.

I’m using this library myself in another project (which I’ll post about later) and am quite satisfied with how it works. It’s probably still rough around the edges here and there - so any PRs are more than welcome.

Thanks for reading!


Building & Releasing Go application with GitHub Actions

•••

GitHub released Actions a while ago, but I didn’t have a good chance to try them out until recently.

While they are still somewhat rough around the edges, with limitations like not being able to trigger a workflow with the click of a button (as other CI servers allow) - I still find them quite powerful.

By utilizing GitHub Actions, I was able to make full use of Git tags to produce packaged releases with release notes attached and pre-built artifacts uploaded.

There are many ways one could make this work, but here is how I’ve achieved my setup.

Project structure

A typical project of mine has the following structure (some irrelevant files/folders omitted):

├── build/          # Ignored from git, contains build artifacts
├── CHANGELOG.md    # Changelog file containing every version's release notes
├── Dockerfile      # Dockerfile used to build the app inside of docker container
├── go.mod
├── go.sum
├── hack/           # Support scripts (we will get to them later)
├── main.go
...
└── Makefile

Versioning

I wanted to perform as few manual operations as possible when making a release. For versioning of my projects I’ve decided to go with Git tags, following semantic versioning.

When deciding which version the binary should have, the following rules are applied:

  • If the current commit is the same as the latest tag - the tag name is chosen, e.g. v1.0.2
  • If the current commit does not match the latest tag - the tag name with the abbreviated commit appended is used, e.g. v1.0.2+a3dc218
  • If no tags have been created yet - the current commit is appended to v0.0.0, e.g. v0.0.0+a3dc218

A simple shell script (hack/version.sh) helps with this:

#!/bin/sh

LATEST_TAG_REV=$(git rev-list --tags --max-count=1)
LATEST_COMMIT_REV=$(git rev-list HEAD --max-count=1)

if [ -n "$LATEST_TAG_REV" ]; then
    LATEST_TAG=$(git describe --tags "$LATEST_TAG_REV")
else
    LATEST_TAG="v0.0.0"
fi

if [ "$LATEST_TAG_REV" != "$LATEST_COMMIT_REV" ]; then
    echo "$LATEST_TAG+$(git rev-list HEAD --max-count=1 --abbrev-commit)"
else
    echo "$LATEST_TAG"
fi
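
For example, running it on an untagged commit after the v1.0.2 tag would print (commit hash illustrative):

$ hack/version.sh
v1.0.2+a3dc218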

Building a project

I use make to build my Go projects, and also to start the Docker build process (make is used inside the container as well). When building the project, I also want to inject the version string into the binary (this way myapp --version can respond with the version info).

This is achieved with a Makefile that can look something like this:

APP_VERSION=$(shell hack/version.sh)
GO_BUILD_CMD= CGO_ENABLED=0 go build -ldflags="-X main.appVersion=$(APP_VERSION)"

BINARY_NAME=my-app
BUILD_DIR=build

.PHONY: all
all: clean lint test build-all package-all

.PHONY: lint
lint:
	@echo "Linting code..."
	@go vet ./...

.PHONY: test
test:
	@echo "Running tests..."
	@go test ./...

.PHONY: pre-build
pre-build:
	@mkdir -p $(BUILD_DIR)

.PHONY: build-linux
build-linux: pre-build
	@echo "Building Linux binary..."
	GOOS=linux GOARCH=amd64 $(GO_BUILD_CMD) -o $(BUILD_DIR)/$(BINARY_NAME)-linux-amd64

.PHONY: build-osx
build-osx: pre-build
	@echo "Building OSX binary..."
	GOOS=darwin GOARCH=amd64 $(GO_BUILD_CMD) -o $(BUILD_DIR)/$(BINARY_NAME)-darwin-amd64

.PHONY: build-all
build-all: build-linux build-osx

.PHONY: package-linux
package-linux:
	@echo "Packaging Linux binary..."
	tar -C $(BUILD_DIR) -zcf $(BUILD_DIR)/$(BINARY_NAME)-$(APP_VERSION)-linux-amd64.tar.gz $(BINARY_NAME)-linux-amd64

.PHONY: package-osx
package-osx:
	@echo "Packaging OSX binary..."
	tar -C $(BUILD_DIR) -zcf $(BUILD_DIR)/$(BINARY_NAME)-$(APP_VERSION)-darwin-amd64.tar.gz $(BINARY_NAME)-darwin-amd64

.PHONY: package-all
package-all: package-linux package-osx

.PHONY: docker
docker:
	docker build --force-rm -t $(BINARY_NAME) .

.PHONY: build-in-docker
build-in-docker: docker
	docker rm -f $(BINARY_NAME) || true
	docker create --name $(BINARY_NAME) $(BINARY_NAME)
	# Trailing "/." copies the directory contents rather than the directory itself
	docker cp '$(BINARY_NAME):/opt/.' $(BUILD_DIR)
	docker rm -f $(BINARY_NAME)

.PHONY: clean
clean:
	@echo "Cleaning..."
	@rm -Rf $(BUILD_DIR)
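
For the -X flag above to have an effect, main.go needs a matching package-level variable. Here is a minimal sketch - the variable name main.appVersion matches the Makefile, while the flag handling is just an assumption of how myapp --version might be wired up:

package main

import (
    "flag"
    "fmt"
)

// appVersion is overridden at build time via
// -ldflags="-X main.appVersion=<version>".
var appVersion = "v0.0.0+unknown"

func main() {
    showVersion := flag.Bool("version", false, "print version and exit")
    flag.Parse()

    if *showVersion {
        fmt.Println(appVersion)
        return
    }
    // ... actual application logic ...
}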

Now it’s simply a matter of running make all to produce binaries for OSX and Linux and package them into .tar.gz files.
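
After a successful run, the build directory contains something like this (version string illustrative):

build/
├── my-app-darwin-amd64
├── my-app-linux-amd64
├── my-app-v1.0.2+a3dc218-darwin-amd64.tar.gz
└── my-app-v1.0.2+a3dc218-linux-amd64.tar.gz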

One could also run make build-in-docker to run everything in a Docker container and then copy the results out. This is the command we will use to build our project with GitHub Actions, as it lets us install any additional required tooling without polluting the builder system.

The following Dockerfile is enough to make it work:

FROM golang:1.13-alpine

RUN apk --no-cache add alpine-sdk
WORKDIR /src

# Copy over dependency file and download it if files changed
# This allows build caching and faster re-builds
COPY go.mod  .
COPY go.sum  .
RUN go mod download

# Add rest of the source and build
COPY . .
RUN make all

# Copy to /opt/ so we can extract files later
RUN cp build/* /opt/

Action - Build

Now it’s time to set up a build workflow for our project. Workflows for GitHub Actions are placed in the .github/workflows folder. I want my builds to run both on pushes to master and on pushes to PRs targeting master.

My .github/workflows/build.yml would look like this:

name: Build

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  build:
    name: Build on push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@master

      - name: Build project
        run: |
          make build-in-docker

The workflow we have here consists of just two steps:

  1. Checkout code
  2. Build with make build-in-docker

Great, now we have validation of all PRs. It’s also a good idea to require status checks to pass before a PR can be merged.

Action - Release

For the release, I want to create a new GitHub release and upload both a changelog and the packaged artifacts to the release page of my project.

While GitHub provides official create-release and upload-release-asset actions, I found them very limiting - especially upload-release-asset, which requires an exact file name to be specified and supports only one file upload per step. This can quickly become a nightmare to handle if I want to include the version in the filename and support cross-platform builds.

Luckily, there is a community-provided softprops/action-gh-release action which combines the creation and asset-upload steps and supports glob matching for files. To make things even better, it can read the contents of the release “body” (the release notes) from a file in my workspace.

Putting it to work, my .github/workflows/release.yml workflow looks like this:

name: Release

on:
  push:
    tags:
      - 'v*'

jobs:
  build:
    name: Create Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@master

      - name: Build project
        run: |
          make build-in-docker

      - name: Generate Changelog
        run: |
          VERSION=$(hack/version.sh)
          hack/changelog.sh "$VERSION" > build/CHANGELOG.md

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          body_path: build/CHANGELOG.md
          files: build/my-app-*.tar.gz
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

This workflow will only trigger on tags that start with v, e.g. v1.2.3, and will do the following:

  1. Checkout code
  2. Build it with make in docker
  3. Run a script to generate changelog for the version we are building and save it
  4. Create a GitHub release for this tag, using the generated changelog as release notes, and upload all archive files from the build directory.
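
Cutting a release thus boils down to tagging a commit and pushing the tag (version illustrative):

git tag v1.2.3
git push origin v1.2.3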

Note: You do not need to configure the GITHUB_TOKEN secret, as it’s automatically injected for you by the runner agent.

Nice, clean and easy!

Bonus - Changelog

I’ve mentioned that I keep the entire changelog in one file - CHANGELOG.md - and you can see the generation step calling hack/changelog.sh.

Here is how they look:

## 1.1.0

### Improvements

* Improved failure handling
* Reduced execution time for all operations by 10%

### Bug fixes

* Panic caused by renewal of credentials (#38)
* Certificate name does not follow standards (#32)

## 1.0.0

This is the initial release.

### Known issues

* Failure handling is far from ideal
* Operations can take long time to complete

#!/bin/sh

MARKER_PREFIX="##"
VERSION=$(echo "$1" | sed 's/^v//g')

IFS=''
found=0

cat CHANGELOG.md | while read -r line; do

    # If not found and matching heading
    if [ $found -eq 0 ] && echo "$line" | grep -q "^$MARKER_PREFIX $VERSION$"; then
        found=1
        continue
    fi

    # If the needed version was found and we reach the next delimiter - stop
    if [ $found -eq 1 ] && echo "$line" | grep -q -E "^$MARKER_PREFIX [[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+"; then
        found=0
        break
    fi

    # Keep printing out lines as no other version delimiter found
    if [ $found -eq 1 ]; then
        echo "$line"
    fi
done

By running hack/changelog.sh v1.1.0 we get only the changelog relevant to that version.
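
Given the CHANGELOG.md above, that call prints:

### Improvements

* Improved failure handling
* Reduced execution time for all operations by 10%

### Bug fixes

* Panic caused by renewal of credentials (#38)
* Certificate name does not follow standards (#32)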

Conclusion

I find GitHub Actions great for simpler projects; they provide a lot of flexibility where traditionally we had to reach for TravisCI or Jenkins. With Actions being extendable, we can most likely fulfill more complex scenarios too, but that remains to be seen.

GitHub is being very generous by supporting both public and private repos for free with Actions (with sizable limits on private ones before payment is required, unlike others), and we can also bring our own workers. (Wait, can I run this on my home server?!)


GitHub Pages & Go vanity urls

•••

When writing Go code you almost certainly import some third-party package. Commonly, such packages come from GitHub and are added with their full GitHub project URL, like so:

import (
    "github.com/octocat/phantom"
)

When you are writing your own packages and/or libraries, it can become repetitive, verbose and limiting to refer to these packages via github.com/<user>/<package>, especially if both your username and package names are long. Additionally, if you choose to move from GitHub to, let’s say, GitLab or Bitbucket - you have to change the imports in all of your projects.

And what if your projects are used by others? Then you have just broken everything for them - remember the nasty issue with the github.com/sirupsen/logrus rename (a simple case change in the username)?

Wouldn’t it be so much nicer if we could use a custom domain instead of github.com/<username>? Something that would make our imports look like the snippet below. For the sake of example, we will alias a made-up github.com/octocat/phantom package to octo.io/phantom (also made up).

import (
    "octo.io/phantom"
    // vs
    "github.com/octocat/phantom"
)

Well, I wouldn’t be writing this post if we couldn’t! Using a custom domain to shorten import paths like this is known as vanity URLs, and it is a very common thing to do (given you have a good short domain). Kubernetes does this for one specific repo: instead of github.com/kubernetes they simply use k8s.io. Shaving 15 characters off every import statement eases readability quite a lot.

Requirements

How any remote import works is explained in Go Docs - Cmd: Remote Import Paths.

For code hosted on other servers, import paths may either be qualified with the version control type, or the go tool can dynamically fetch the import path over https/http and discover where the code resides from a <meta> tag in the HTML.

Basically, to make Go tooling understand vanity URLs (which are treated as any other remote import path), the server must respond with a special <meta name="go-import" content="import-prefix vcs repo-root"> tag.
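
For our made-up example, the tag we need to serve for octo.io/phantom is:

<meta name="go-import" content="octo.io/phantom git https://github.com/octocat/phantom">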

Setup

We will achieve this by utilizing the permalink feature of Jekyll and a custom layout specifically designed for our packages.

It will work like this: we create one file per package we want to expose. I prefer to put the definitions of my repositories in a separate folder, e.g. go-imports.

Package definition
Create a new file for the package: go-imports/phantom.md. In the newly created file, put the following Front Matter definition:

---
repo: phantom
---

This is all we need to define that our package repository is called phantom. We could also specify branch: v1 to point at a v1 branch; later we will ensure master is the default.
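
With such a branch override, the definition would instead look like this:

---
repo: phantom
branch: v1
---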

Site Config
In the site config _config.yml define the following:

domain: octo.io
github_username:  octocat

defaults:
  - scope:
      path: "go-imports"
    values:
      layout: go-imports
      permalink: /:basename
      branch: master

What these defaults allow us to do is skip repeating the same settings in the Front Matter of each package file.
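
For phantom.md, the defaults above are equivalent to writing this Front Matter by hand in every package file:

---
repo: phantom
layout: go-imports
permalink: /phantom
branch: master
---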

Layout definition
Create a new layout in _layouts, e.g _layouts/go-imports.html with following contents:

<!DOCTYPE html>{% assign package_name = page.name | remove: ".md" %}
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en-us">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">

  <meta name="go-import" content="{{ site.domain }}/{{ package_name }} git https://github.com/{{ site.github_username }}/{{ page.repo }}">
  <meta name="go-source" content="{{ site.domain }}/{{ package_name }} _ https://github.com/{{ site.github_username }}/{{ page.repo }}/tree/{{ page.branch }}{/dir}
  https://github.com/{{ site.github_username }}/{{ page.repo }}/blob/{{ page.branch }}{/dir}/{file}#L{line}">

  <meta http-equiv="refresh" content="0; https://github.com/{{ site.github_username }}/{{ page.repo }}">

</head>
<body>
</body>
</html> 

A few things are going on here:

  • First, we take the page name, remove the file extension (.md in our case) and assign the result to the package_name variable. This is technically not needed if you are fine with setting the package name in the Front Matter - but I wanted to remove the repetitive steps.
  • Next, we generate the go-import meta tag. If you are not on GitHub - it’s easy to change.
  • We also generate a go-source meta tag that allows jumping to the source from godoc and other clients.
  • Last, we add a page refresh that redirects to the repo on GitHub if this page is opened in a browser.

Test it
After pushing these changes, you should be able to use curl to validate it: curl https://octo.io/phantom.

This should produce something like this:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en-us">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">

  <meta name="go-import" content="octo.io/phantom git https://github.com/octocat/phantom">
  <meta name="go-source" content="octo.io/phantom _ https://github.com/octocat/phantom/tree/master{/dir}
  https://github.com/octocat/phantom/blob/master{/dir}/{file}#L{line}">

  <meta http-equiv="refresh" content="0; https://github.com/octocat/phantom">

</head>
<body>
</body>
</html>

You can also just try running go get octo.io/phantom.

Limitations

This setup does come with some limitations, and I’ll outline some of them here:

  • We have to manually create a file per package (luckily it’s very small)
  • As the site is static - we cannot serve the meta tags only when ?go-get=1 is in the query string
  • We are using permalinks, which means you cannot have pages with the same name as your packages.

Conclusion

That’s it! Using GitHub Pages and static site generation with Jekyll is a quick and easy way to get vanity URL support for your Go packages.

If you need more dynamic processing or more flexibility, you can easily write one yourself (in Go, naturally!) or use a readily available project such as https://github.com/GoogleCloudPlatform/govanityurls/. That option, however, comes with the requirement of hosting the app yourself.


And so it begins...

•••

I finally got around to registering an .io domain. Looking back, it’s quite impressive how long it took me to do so. While I already have several other domains that I’ve used for blogs and the like, I was never really happy with how they worked.

I’ve been itching to get something working on top of a static site generator, and GitHub Pages seemed like a good way to go.

So, back to the domain. I wanted alexejk.io specifically for use with Go vanity URLs - that is, to use alexejk.io/pkgname import paths instead of the somewhat longer (and hard-tied to GitHub) github.com/alexejk/pkgname. Don’t get me wrong, I really like GitHub and what they have done lately, but it’s still nice to be able to move your source around without breaking any imports for people, should the need arise.

P.S. I’ll post later about how I’ve done vanity URL support on top of GitHub Pages.