# How to Concatenate Strings in Go


🗓 July 27, 2018 | 👱 By: Hugh



I've seen a lot of people ask, "how do I concatenate strings in Go?" It's a reasonable question, and as always there are many potential answers, each with their pros and cons. In cases where efficiency isn't a concern, just go for the most readable option: simply using the + operator or Sprintf could be all you need.

An example using +:

bigExclamation := "!!!"
fmt.Println("Hello World" + bigExclamation)

(Playground)

And an example using fmt.Sprintf:

packageName := "fmt"
function := "Sprintf()"
output := fmt.Sprintf("%s.%s", packageName, function)
fmt.Println(output)

(Playground)

But what if we want to concatenate lots of strings of arbitrary length? There is a bit of discussion in this post on Stack Overflow, but some of it is misleading, so I want to give my take. I look at it as a sliding scale: at one end you have safety, at the other you have speed, and both should be considered before selecting an option. At the safe end of the spectrum is strings.Builder, which efficiently manages growing the allocated memory, allows appending strings, bytes and runes, and contains protection against unsafe usage. At the fast end is allocating a byte slice, then copying the strings directly into it.



So why is a byte slice the fastest? Because it is the simplest: you allocate a chunk of memory, then copy the data in. If, for example, we're trying to concatenate n strings returned by the function getRandomString(), we could naively do something like this:

// Assume getRandomString() always returns strings of the same length.
buf := make([]byte, n * len(getRandomString()))
count := 0
for i := 0; i < n; i++ {
    count += copy(buf[count:], getRandomString())
}

The copy function returns the number of bytes copied, allowing you to track where in the slice to write the next chunk.

But what if the length of the random string changes? If it is consistently shorter, that's not a big deal. If it is consistently longer, though, the slice will fill up and you'll have problems. You won't get a crash; copy silently truncates, copying only as many bytes as fit, and once the buffer is completely full it returns 0 and copies nothing at all.
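To make that behaviour concrete, here's a minimal, self-contained sketch (the buffer size and input strings are made up for illustration):

```go
package main

import "fmt"

// fillBuffer copies the given strings into a fixed-size buffer and returns
// the per-string byte counts reported by copy, plus the buffer contents.
func fillBuffer(size int, parts ...string) ([]int, string) {
	buf := make([]byte, size)
	counts := make([]int, 0, len(parts))
	count := 0
	for _, p := range parts {
		n := copy(buf[count:], p) // copies only as many bytes as fit
		counts = append(counts, n)
		count += n
	}
	return counts, string(buf)
}

func main() {
	counts, s := fillBuffer(5, "abc", "defgh", "ijk")
	fmt.Println(counts, s) // [3 2 0] abcde
}
```

The second string is cut off mid-copy (only 2 of its 5 bytes fit), and the third isn't copied at all.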

So what can you do? You could check that copy returns a value equal to the length of the string being copied, and if you realise the buffer is full, allocate a new buffer and copy the data into it. But how big should the new buffer be? Add 50%? Double it? And if only half of a string was copied, you now have to copy the rest in after reallocating. The alternative is to trust the Go authors and just use append, which takes care of all this for you. Still allocate an estimate of the memory you'll need, but set the length to 0:

buf := make([]byte, 0, n * len(getRandomString()))
for i := 0; i < n; i++ {
    buf = append(buf, getRandomString()...)
}

In theory this will be fractionally slower due to the need to update the slice metadata (e.g. the length), but it is safer and will handle growing the buffer for you if required.
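A quick way to convince yourself that append really does handle growth is to deliberately underestimate the capacity (the helper name and sizes here are made up for the demo):

```go
package main

import "fmt"

// concatWithSmallEstimate appends n copies of s to a slice whose initial
// capacity is deliberately too small, showing that append grows it safely.
func concatWithSmallEstimate(s string, n int) string {
	buf := make([]byte, 0, 4) // underestimate on purpose
	for i := 0; i < n; i++ {
		buf = append(buf, s...) // append reallocates once the capacity is exceeded
	}
	return string(buf)
}

func main() {
	fmt.Println(concatWithSmallEstimate("0123", 3)) // 012301230123
}
```

Nothing is truncated, even though 12 bytes end up in a slice that started with room for 4.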

The last option I'll consider here is strings.Builder, which is the safest and arguably clearest, but slowest. One way to boost its speed is to use the Grow method to pre-allocate memory, much as you'd do for the other two options:

var b strings.Builder
b.Grow(n * len(getRandomString()))
for i := 0; i < n; i++ {
    b.WriteString(getRandomString())
}
str := b.String()

It's pretty easy to read, grows as required, supports adding runes directly if required, and has some other protections, like preventing the builder being copied unsafely.
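Here's a small sketch of that rune support (the content being built is made up for illustration). WriteRune encodes the rune to UTF-8 as it's appended, and Builder's Len reports bytes, not runes:

```go
package main

import (
	"fmt"
	"strings"
)

// buildTemperature mixes strings, runes and bytes in one Builder.
func buildTemperature() string {
	var b strings.Builder
	b.WriteString("25")
	b.WriteRune('°') // encoded as 2 bytes of UTF-8
	b.WriteByte('C')
	return b.String()
}

func main() {
	s := buildTemperature()
	fmt.Println(s, len(s)) // 25°C 5
}
```

With a raw byte slice you'd have to encode the rune yourself (e.g. with utf8.AppendRune); the Builder does it for you.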

So how do they perform? Well, my benchmarks are quite surprising:

BenchmarkAppend-4    2000    730761 ns/op
BenchmarkCopy-4      1000   1068218 ns/op
BenchmarkBuilder-4   1000   1683775 ns/op

For some reason append performs slightly better than copy. This could come down to a compiler optimisation, or, more likely, an error in my code. Feel free to run it yourself and correct my mistakes; I've pasted the code below.



I'll come back to that later, maybe in a post about my findings. In the meantime, it's worth noting that the slowest of the three takes 1.7ms to join 100,000 strings; be aware of that before optimising your code to use the fastest concatenation method. If getRandomString() takes any more than ~17ns per call, then fetching all those strings becomes the actual bottleneck, since it will take as much time as the joins (100,000 × 17ns = 1.7ms). As a demo, here is the same set of benchmarks with a 100ns (0.1µs) delay added. I tried using 17ns, but time.Sleep doesn't seem to be able to handle such a small sleep.

BenchmarkAppend-4   50 41298048 ns/op
BenchmarkCopy-4     30 41878881 ns/op
BenchmarkBuilder-4  50 40713717 ns/op

The minimum possible sleep seems to be bigger than 100ns too, but as you can see, with even a small delay the three concatenation methods all perform pretty much equally. The moral of the story: make sure the actually slow bits of your code are optimised first. If fetching the list of strings takes 500ms, then joining them in 0.7ms instead of 1.7ms is not going to make a whole lot of difference.

main.go

package main

import (
	"strings"
)

func getRandomString() string {
	// Try uncommenting the 100ns sleep below. Don't forget to import "time".
	// time.Sleep(100 * time.Nanosecond)
	return "0123456789012345"
}

func concatCopy(n int) string {
	buf := make([]byte, n*len(getRandomString()))
	count := 0
	for i := 0; i < n; i++ {
		count += copy(buf[count:], getRandomString())
	}
	return string(buf)
}

func concatAppend(n int) string {
	buf := make([]byte, 0, n*len(getRandomString()))
	for i := 0; i < n; i++ {
		buf = append(buf, getRandomString()...)
	}
	return string(buf)
}

func concatBuilderPreGrow(n int) string {
	var b strings.Builder
	b.Grow(n * len(getRandomString()))
	for i := 0; i < n; i++ {
		b.WriteString(getRandomString())
	}
	return b.String()
}

func main() {
}

main_test.go

package main

import "testing"

func BenchmarkAppend(b *testing.B) {
	for n := 0; n < b.N; n++ {
		concatAppend(100000)
	}
}

func BenchmarkCopy(b *testing.B) {
	for n := 0; n < b.N; n++ {
		concatCopy(100000)
	}
}

func BenchmarkBuilder(b *testing.B) {
	for n := 0; n < b.N; n++ {
		concatBuilderPreGrow(100000)
	}
}