Goroutines in Go: A Practical Guide to Concurrency

Deven J.
Published March 7, 2025

Let’s face it: modern apps often need to do a million things at once. If your app isn’t set up for that, it can feel clunky and slow. That’s why concurrency is so important. Go (often called Golang) has concurrency built-in, so you can write faster, more flexible software without the usual hassle of managing threads.

In this guide, we’ll start by reviewing how programs usually execute tasks sequentially and why that can lead to bottlenecks. Then we’ll explore Go’s approach to concurrency through goroutines and channels, compare concurrency to parallelism, and look at practical examples—including a small, real-world use case. If you know basic Go syntax but want to dive deeper into goroutines, this post is for you.

Table of Contents

  1. Traditional Program Execution: A quick refresher on line-by-line (blocking) execution.
  2. Why Concurrency Matters: The practical problems concurrency solves.
  3. Concurrency vs. Parallelism: Understanding these two often-confused concepts.
  4. Concurrency in Go with Goroutines: Creating lightweight concurrent functions with the go keyword.
  5. Communicating Between Goroutines - Channels: Passing data safely between goroutines.
  6. Avoiding Race Conditions: Using channels or sync primitives to keep your code safe.
  7. Real-World Example: A simple concurrent webserver to illustrate concurrency in action.
  8. Best Practices: Tips for effectively using goroutines in production.
  9. Conclusion: A summary of how Go’s concurrency features help you write efficient, maintainable applications.

Traditional Program Execution: Line by Line

Most traditional programs execute instructions one after another, blocking subsequent instructions until the current one finishes. This approach is straightforward but can become inefficient when tasks take a long time.

Example in C

```c
#include <stdio.h>

int main() {
    printf("Step 1: Fetch user data\n");
    printf("Step 2: Process user data\n");
    printf("Step 3: Save user data\n");
    return 0;
}
```

Expected Output

Step 1: Fetch user data
Step 2: Process user data
Step 3: Save user data

This sequential process ensures clarity and simplicity. However, if fetching user data took several seconds, the entire program would stall, waiting for that operation to finish. That can become a bottleneck in systems that need to handle multiple users, processes, or data sources simultaneously.

Why Concurrency Matters

Imagine you’re building a web service that needs to handle multiple incoming requests at once. If your server processes them individually, each request might have to wait in line—even if other tasks or CPU cores are available to do work. Concurrency allows your program to switch between tasks and utilize waiting time (e.g., I/O wait) more efficiently, keeping the application responsive and fast.

Concurrency vs. Parallelism

Before we look at Go’s concurrency features, let’s clarify concurrency vs. parallelism:

  • Concurrency: Having multiple tasks in progress during overlapping time periods, potentially interleaving their execution on a single CPU core.
  • Parallelism: Literally executing multiple tasks at the same instant on different CPU cores.

In Go, goroutines let you structure your program for concurrency easily. If you run your Go program on a multi-core system, the runtime can schedule goroutines in parallel across the available cores by default (the limit is controlled by runtime.GOMAXPROCS, which defaults to the number of logical CPUs). But even on a single core, concurrency ensures your tasks don’t block each other unnecessarily.

Concurrency in Go with Goroutines

Go introduces goroutines, which are lightweight functions that can run concurrently. You launch them using the go keyword. When you place go before a function call, Go schedules that function to run as a separate goroutine.

Example: Basic Goroutines

```go
package main

import (
	"fmt"
	"time"
)

func task(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, "running", i)
		time.Sleep(time.Millisecond * 500)
	}
}

func main() {
	// The 'go' keyword launches a new goroutine
	go task("Task 1")
	go task("Task 2")

	// Sleep to ensure the main function doesn't exit before goroutines finish
	time.Sleep(time.Second * 2)
	fmt.Println("Main function completed")
}
```

What is the go keyword?

When you prepend go to a function call, it tells the Go runtime to execute that function concurrently. The Go runtime manages these goroutines on top of operating system threads, making them much more lightweight than traditional OS threads.

If you don’t prevent the main function from exiting (for example, by using time.Sleep or more robust solutions like sync.WaitGroup), your program might end before the goroutines finish.

Expected Output

Task 1 running 1
Task 2 running 1
Task 1 running 2
Task 2 running 2
Task 1 running 3
Task 2 running 3
Main function completed

Notice how the output of “Task 1” and “Task 2” is interleaved, indicating concurrency. The exact interleaving may differ between runs; the Go scheduler makes no ordering guarantees.

Synchronizing Goroutines Without time.Sleep()

While time.Sleep can demonstrate concurrency for a quick example, it isn’t a robust synchronization approach. A more idiomatic solution is using a sync.WaitGroup.

A WaitGroup is a way to wait for a collection of goroutines to finish. It provides three essential methods:

  1. Add(delta int): Increments the internal counter by delta. Typically, you call wg.Add(n) if you know you’ll start n new goroutines.
  2. Done(): Decrements the internal counter by 1. You typically call defer wg.Done() at the start of each goroutine that needs to be tracked.
  3. Wait(): Blocks until the internal counter becomes zero. This means all goroutines that were added have signaled they are done.

Example: Using a WaitGroup Instead of Sleep

```go
package main

import (
	"fmt"
	"sync"
)

func task(name string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(name, "completed")
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	go task("Task 1", &wg)
	go task("Task 2", &wg)

	// This will block until both tasks call wg.Done()
	wg.Wait()
	fmt.Println("All tasks completed")
}
```

Here, WaitGroup precisely coordinates your goroutines. You increment with wg.Add(2) because two goroutines will run, and each goroutine calls defer wg.Done() when finished.

Communicating Between Goroutines: Channels

One of the coolest parts of Go’s concurrency model is its focus on communication. Rather than having multiple goroutines directly read and write to the same memory, Go encourages you to share memory by communicating.

Unbuffered Channels

An unbuffered channel is created like this:

```go
ch := make(chan int)
```

When you send a value into an unbuffered channel (using <-), the sending goroutine blocks until another goroutine receives the value.

```go
package main

import (
	"fmt"
	"time"
)

func sendData(ch chan<- int, data int) {
	fmt.Println("Sending", data)
	ch <- data // Block until received
	fmt.Println("Finished sending", data)
}

func receiveData(ch <-chan int) {
	val := <-ch // Block until value is sent
	fmt.Println("Received", val)
}

func main() {
	ch := make(chan int)
	go sendData(ch, 10)
	go receiveData(ch)
	time.Sleep(time.Second)
	fmt.Println("Done")
}
```

Expected Output

Sending 10
Received 10
Finished sending 10
Done

Notice how the send and receive synchronize: the sender blocks until the receiver takes the value, so the two goroutines hand off data without conflict.

Buffered Channels

A buffered channel has a capacity, meaning you can send multiple values before it blocks:

```go
ch := make(chan int, 3)
```

Once this channel holds three values, any further sends will block until a receiver consumes something.

Avoiding Race Conditions

A race condition occurs when multiple goroutines access shared data at the same time, and at least one modifies it. This can lead to unpredictable outcomes.

Using Channels to Avoid Shared Data

Go’s idiomatic approach is not to share memory directly but to pass data through channels. That way, only one goroutine accesses a piece of data at a time.

Using Mutexes

When you do need to share data structures, you can use a mutex:

```go
package main

import (
	"fmt"
	"sync"
)

type safeCounter struct {
	mu    sync.Mutex
	count int
}

func (sc *safeCounter) increment() {
	sc.mu.Lock()
	sc.count++
	sc.mu.Unlock()
}

func main() {
	sc := &safeCounter{}
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			sc.increment()
		}
	}()

	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			sc.increment()
		}
	}()

	wg.Wait()
	fmt.Println("Final count:", sc.count)
}
```

Here, sync.Mutex ensures only one goroutine can increment count at a time, preventing data races.

Checking for Races

Go provides a built-in race detector. Run your app with:

go run -race main.go

or

go test -race

It will instrument your code and warn you about potential race conditions. Using this whenever you’re writing or modifying concurrent code is highly recommended.

Common Concurrency Patterns

While the examples above show basic usage, Go developers frequently use patterns like Fan-Out/Fan-In and Worker Pools for more complex tasks:

  1. Fan-Out/Fan-In
    • Fan-Out: Start multiple goroutines to process parts of a job in parallel.
    • Fan-In: Collect the results from those goroutines back into a single channel or data structure.
  2. Worker Pools
    • Create a fixed number of workers (goroutines) that read tasks from a channel.
    • This approach prevents spawning a huge number of goroutines if tasks spike.

Short Example (Worker Pool):

```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, tasks <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for t := range tasks {
		// Do some work, e.g., multiply by 2
		results <- t * 2
	}
}

func main() {
	tasks := make(chan int, 10)
	results := make(chan int, 10)
	var wg sync.WaitGroup

	// Create 3 workers
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go worker(i, tasks, results, &wg)
	}

	// Send tasks
	for i := 1; i <= 5; i++ {
		tasks <- i
	}
	close(tasks)

	// Wait for all workers to finish, then close results
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for r := range results {
		fmt.Println("Result:", r)
	}
}
```

Real-World Example: A Concurrent Web Server

Below is a simple real-world example: an HTTP server that calculates the factorial of a number. Note that Go’s net/http package already runs each handler in its own goroutine, so requests don’t block one another. Inside a handler you can launch further goroutines to offload heavy work, as long as the response is written before the handler returns, because the http.ResponseWriter is only valid for the handler’s lifetime.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"sync"
)

// factorial computes the factorial of n in a naive, recursive way.
// Note: int overflows for n > 20; use math/big for larger inputs.
func factorial(n int) int {
	if n <= 1 {
		return 1
	}
	return n * factorial(n-1)
}

func main() {
	var wg sync.WaitGroup

	http.HandleFunc("/factorial", func(w http.ResponseWriter, r *http.Request) {
		// Parse query param, e.g., /factorial?n=5
		nStr := r.URL.Query().Get("n")
		n, err := strconv.Atoi(nStr)
		if err != nil {
			http.Error(w, "Invalid number", http.StatusBadRequest)
			return
		}

		// Offload the calculation to a goroutine and wait for its result.
		// The handler must not return before the response is written:
		// the ResponseWriter is only valid until the handler returns.
		resultCh := make(chan int, 1)
		wg.Add(1)
		go func(num int) {
			defer wg.Done()
			resultCh <- factorial(num)
		}(n)

		result := <-resultCh
		fmt.Fprintf(w, "Factorial(%d) = %d\n", n, result)
	})

	log.Println("Server starting at :8080")
	// Start the server (blocking call)
	log.Fatal(http.ListenAndServe(":8080", nil))
	// In a real-world application, you'd handle a graceful shutdown signal
	// and call wg.Wait() before fully exiting.
}
```

Why This Is Useful

  • Multiple requests hitting /factorial can be processed without blocking each other.
  • The sync.WaitGroup could be used to ensure we manage goroutines properly if we ever implement a graceful shutdown process.
  • This design can easily scale, illustrating how concurrency in Go addresses real-world demands.

Best Practices for Using Goroutines

  1. Use Synchronization Tools (WaitGroups, Channels, Contexts): Avoid using time.Sleep() to keep goroutines alive. Instead, rely on sync.WaitGroup or channels.
  2. Limit the Number of Goroutines: Even though they’re lightweight, spawning tens of thousands unnecessarily can stress the runtime.
  3. Use Buffered Channels for Rate Limiting: If multiple tasks write to a shared resource, buffering can help you control the pace and avoid immediate blocking.
  4. Prevent Goroutine Leaks: Use context.WithCancel() or other signaling methods to stop goroutines that are no longer needed.
  5. Watch Out for Shared Data: If you must share data, use Go’s sync primitives (sync.Mutex, sync.RWMutex) or communicate via channels to avoid race conditions.
  6. Check for Races in Development: Use Go’s built-in race detector (-race) during development and testing to catch hidden race conditions early.

Conclusion

Concurrency matters because it lets your applications handle multiple tasks at once, keeping everything fast and responsive. In Go, goroutines make it easy to write concurrent programs without the usual headaches of managing OS threads. By pairing goroutines with channels, you can pass data around cleanly and avoid many common pitfalls, such as race conditions, that plague concurrent programming in other languages.

Beyond the basics, you can scale up your applications with worker pools, fan-out/fan-in patterns, and advanced synchronization techniques. While concurrency can solve many performance problems, it’s not a silver bullet: you must think carefully about synchronization, resource usage, and graceful termination. Tools like sync.WaitGroup, sync.Mutex, and Go’s -race detector help you build safe, robust systems. Stay tuned for future posts that explore these tools in more depth. Happy coding!
