Let’s face it: modern apps often need to do a million things at once. If your app isn’t set up for that, it can feel clunky and slow. That’s why concurrency is so important. Go (often called Golang) has concurrency built-in, so you can write faster, more flexible software without the usual hassle of managing threads.
In this guide, we’ll start by reviewing how programs usually execute tasks sequentially and why that can lead to bottlenecks. Then we’ll explore Go’s approach to concurrency through goroutines and channels, compare concurrency to parallelism, and look at practical examples—including a small, real-world use case. If you know basic Go syntax but want to dive deeper into goroutines, this post is for you.
Table of Contents
- Traditional Program Execution: A quick refresher on line-by-line (blocking) execution.
- Why Concurrency Matters: The practical problems concurrency solves.
- Concurrency vs. Parallelism: Understanding these two often-confused concepts.
- Concurrency in Go with Goroutines: Creating lightweight concurrent functions with the go keyword.
- Communicating Between Goroutines (Channels): Passing data safely between goroutines.
- Avoiding Race Conditions: Using channels or sync primitives to keep your code safe.
- Real-World Example: A simple concurrent web server to illustrate concurrency in action.
- Best Practices: Tips for effectively using goroutines in production.
- Conclusion: A summary of how Go’s concurrency features help you write efficient, maintainable applications.
Traditional Program Execution: Line by Line
Most traditional programs execute instructions one after another, blocking subsequent instructions until the current one finishes. This approach is straightforward but can become inefficient when tasks take a long time.
Example in C
```c
#include <stdio.h>

int main() {
    printf("Step 1: Fetch user data\n");
    printf("Step 2: Process user data\n");
    printf("Step 3: Save user data\n");
    return 0;
}
```
Expected Output
```
Step 1: Fetch user data
Step 2: Process user data
Step 3: Save user data
```
This sequential process ensures clarity and simplicity. However, if fetching user data took several seconds, the entire program would stall, waiting for that operation to finish. That can become a bottleneck in systems that need to handle multiple users, processes, or data sources simultaneously.
Why Concurrency Matters
Imagine you’re building a web service that needs to handle multiple incoming requests at once. If your server processes them individually, each request might have to wait in line—even if other tasks or CPU cores are available to do work. Concurrency allows your program to switch between tasks and utilize waiting time (e.g., I/O wait) more efficiently, keeping the application responsive and fast.
Concurrency vs. Parallelism
Before we look at Go’s concurrency features, let’s clarify concurrency vs. parallelism:
- Concurrency: Making progress on multiple tasks over overlapping time periods, potentially by interleaving their execution on a single CPU core.
- Parallelism: Executing tasks simultaneously on different CPU cores.
In Go, goroutines let you structure your program for concurrency easily. If you run your Go program on a multi-core system with default settings (or by tuning `runtime.GOMAXPROCS`), Go can schedule these goroutines in parallel across available CPU cores. But even on a single core, concurrency ensures your tasks don’t block each other unnecessarily.
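As a quick sanity check, you can ask the runtime how many cores it will use. A minimal sketch (the printed numbers depend on your machine):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the number of logical CPUs available to this process.
	fmt.Println("CPU cores:", runtime.NumCPU())

	// GOMAXPROCS(0) queries the current setting without changing it.
	// Since Go 1.5 it defaults to the number of available cores.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```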
Concurrency in Go with Goroutines
Go introduces goroutines, which are lightweight functions that can run concurrently. You launch them using the `go` keyword: when you place `go` before a function call, Go schedules that function to run as a separate goroutine.
Example: Basic Goroutines
```go
package main

import (
	"fmt"
	"time"
)

func task(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, "running", i)
		time.Sleep(time.Millisecond * 500)
	}
}

func main() {
	// The 'go' keyword launches a new goroutine
	go task("Task 1")
	go task("Task 2")

	// Sleep to ensure the main function doesn't exit before goroutines finish
	time.Sleep(time.Second * 2)

	fmt.Println("Main function completed")
}
```
What is the `go` keyword?
When you prepend `go` to a function call, it tells the Go runtime to execute that function concurrently. The Go runtime manages these goroutines on top of operating system threads, making them much more lightweight than traditional OS threads.
If you don’t prevent the `main` function from exiting (for example, by using `time.Sleep` or more robust solutions like `sync.WaitGroup`), your program might end before the goroutines finish.
Expected Output
```
Task 1 running 1
Task 2 running 1
Task 1 running 2
Task 2 running 2
Task 1 running 3
Task 2 running 3
Main function completed
```
Notice how the output of “Task 1” and “Task 2” is interleaved, indicating concurrency. (The exact interleaving can vary from run to run; the Go scheduler decides when each goroutine gets to run.)
Synchronizing Goroutines Without time.Sleep()
While `time.Sleep` can demonstrate concurrency in a quick example, it isn’t a robust synchronization approach. A more idiomatic solution is a `sync.WaitGroup`.
A `WaitGroup` is a way to wait for a collection of goroutines to finish. It provides three essential methods:
- Add(delta int): Increments the internal counter by `delta`. Typically, you call `wg.Add(n)` if you know you’ll start n new goroutines.
- Done(): Decrements the internal counter by 1. You typically call `defer wg.Done()` at the start of each goroutine that needs to be tracked.
- Wait(): Blocks until the internal counter becomes zero, meaning all the goroutines that were added have signaled they are done.
Example: Using a WaitGroup Instead of Sleep
```go
package main

import (
	"fmt"
	"sync"
)

func task(name string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(name, "completed")
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	go task("Task 1", &wg)
	go task("Task 2", &wg)

	// This will block until both tasks call wg.Done()
	wg.Wait()

	fmt.Println("All tasks completed")
}
```
Here, the `WaitGroup` precisely coordinates your goroutines. You increment with `wg.Add(2)` because two goroutines will run, and each goroutine calls `defer wg.Done()` to signal when it finishes.
Communicating Between Goroutines: Channels
One of the coolest parts of Go’s concurrency model is its focus on communication. Rather than having multiple goroutines directly read and write to the same memory, Go encourages you to share memory by communicating.
Unbuffered Channels
An unbuffered channel is created like this:
```go
ch := make(chan int)
```
When you send a value into an unbuffered channel (using the `<-` operator), the sending goroutine blocks until another goroutine receives the value.
```go
package main

import (
	"fmt"
	"time"
)

func sendData(ch chan<- int, data int) {
	fmt.Println("Sending", data)
	ch <- data // Block until received
	fmt.Println("Finished sending", data)
}

func receiveData(ch <-chan int) {
	val := <-ch // Block until value is sent
	fmt.Println("Received", val)
}

func main() {
	ch := make(chan int)

	go sendData(ch, 10)
	go receiveData(ch)

	time.Sleep(time.Second)
	fmt.Println("Done")
}
```
Expected Output
```
Sending 10
Received 10
Finished sending 10
Done
```
Notice how the send and receive synchronize with each other: the sender doesn’t proceed until the receiver has taken the value, so the two goroutines never conflict.
Buffered Channels
A buffered channel has a capacity, meaning you can send multiple values before it blocks:
```go
ch := make(chan int, 3)
```
Once this channel holds three values, any further sends will block until a receiver consumes something.
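A minimal sketch of that behavior:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)

	// These three sends succeed immediately: the buffer has room.
	ch <- 1
	ch <- 2
	ch <- 3

	// A fourth send here (ch <- 4) would block until a receive frees a slot.

	fmt.Println(<-ch, <-ch, <-ch) // Prints: 1 2 3 (FIFO order)
}
```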
Avoiding Race Conditions
A race condition occurs when multiple goroutines access shared data at the same time, and at least one modifies it. This can lead to unpredictable outcomes.
Using Channels to Avoid Shared Data
Go’s idiomatic approach is not to share memory directly but to pass data through channels. That way, only one goroutine accesses a piece of data at a time.
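As a sketch of this style, here is a counter owned by a single goroutine; other goroutines send increments over a channel instead of touching the variable directly (the channel names are illustrative):

```go
package main

import "fmt"

func main() {
	increments := make(chan int)
	final := make(chan int)

	// One goroutine owns the counter; nobody else reads or writes it.
	go func() {
		count := 0
		for delta := range increments {
			count += delta
		}
		final <- count
	}()

	for i := 0; i < 1000; i++ {
		increments <- 1
	}
	close(increments) // No more increments; the owner reports the total.

	fmt.Println("Final count:", <-final)
}
```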
Using Mutexes
When you do need to share data structures, you can use a mutex:
```go
package main

import (
	"fmt"
	"sync"
)

type safeCounter struct {
	mu    sync.Mutex
	count int
}

func (sc *safeCounter) increment() {
	sc.mu.Lock()
	sc.count++
	sc.mu.Unlock()
}

func main() {
	sc := &safeCounter{}
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			sc.increment()
		}
	}()

	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			sc.increment()
		}
	}()

	wg.Wait()
	fmt.Println("Final count:", sc.count)
}
```
Here, `sync.Mutex` ensures only one goroutine can increment `count` at a time, preventing data races.
Checking for Races
Go provides a built-in race detector. Run your app with:
```
go run -race main.go
```
or
```
go test -race
```
It will instrument your code and warn you about potential race conditions. Using this whenever you’re writing or modifying concurrent code is highly recommended.
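For instance, a deliberately racy program like the sketch below gets flagged, with the detector pointing at the unsynchronized reads and writes (the exact report format varies by Go version):

```go
package main

import "fmt"

func main() {
	count := 0
	done := make(chan bool)

	// Two goroutines increment the same variable with no synchronization;
	// running this with 'go run -race' reports a data race on count.
	for i := 0; i < 2; i++ {
		go func() {
			for j := 0; j < 1000; j++ {
				count++
			}
			done <- true
		}()
	}

	<-done
	<-done
	fmt.Println("count:", count) // Unpredictable without synchronization
}
```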
Common Concurrency Patterns
While the examples above show basic usage, Go developers frequently use patterns like Fan-Out/Fan-In and Worker Pools for more complex tasks:
- Fan-Out/Fan-In
- Fan-Out: Start multiple goroutines to process parts of a job in parallel.
- Fan-In: Collect the results from those goroutines back into a single channel or data structure.
- Worker Pools
- Create a fixed number of workers (goroutines) that read tasks from a channel.
- This approach prevents spawning a huge number of goroutines if tasks spike.
Short Example (Worker Pool):
```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, tasks <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for t := range tasks {
		// Do some work, e.g., multiply by 2
		results <- t * 2
	}
}

func main() {
	tasks := make(chan int, 10)
	results := make(chan int, 10)
	var wg sync.WaitGroup

	// Create 3 workers
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go worker(i, tasks, results, &wg)
	}

	// Send tasks
	for i := 1; i <= 5; i++ {
		tasks <- i
	}
	close(tasks)

	// Wait for all workers to finish, then close the results channel
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for r := range results {
		fmt.Println("Result:", r)
	}
}
```
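The worker pool above is itself a form of fan-out/fan-in. For a more explicit sketch of the pattern, here each producer gets its own output channel and a merge step fans the results back into one (the function names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// square fans out: it processes its share of the work on its own channel.
func square(nums []int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n * n
		}
	}()
	return out
}

// fanIn merges several result channels into a single one.
func fanIn(inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(inputs))
	for _, in := range inputs {
		go func(in <-chan int) {
			defer wg.Done()
			for v := range in {
				out <- v
			}
		}(in)
	}
	// Close the merged channel once every input is drained.
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	// Fan-out: two goroutines each handle part of the job.
	a := square([]int{1, 2, 3})
	b := square([]int{4, 5, 6})

	// Fan-in: collect all results from a single channel.
	for v := range fanIn(a, b) {
		fmt.Println("Result:", v)
	}
}
```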
Real-World Example: A Concurrent Web Server
Below is a simple real-world concurrency example: an HTTP server that calculates the factorial of a number. Go’s `net/http` package already runs each request handler in its own goroutine, so a slow calculation for one request doesn’t block the others. In the handler below, we additionally offload the calculation to a goroutine tracked by a `sync.WaitGroup` and wait for its result on a channel, so the server could drain in-flight work during a graceful shutdown.
```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"sync"
)

// factorial computes the factorial of n in a naive, recursive way.
func factorial(n int) int {
	if n <= 1 {
		return 1
	}
	return n * factorial(n-1)
}

func main() {
	var wg sync.WaitGroup

	http.HandleFunc("/factorial", func(w http.ResponseWriter, r *http.Request) {
		// Parse query param, e.g., /factorial?n=5
		nStr := r.URL.Query().Get("n")
		n, err := strconv.Atoi(nStr)
		if err != nil {
			http.Error(w, "Invalid number", http.StatusBadRequest)
			return
		}

		// Offload the calculation to a goroutine, tracked by the WaitGroup.
		// net/http already runs each handler in its own goroutine, so
		// concurrent requests don't block one another.
		wg.Add(1)
		resultCh := make(chan int, 1)
		go func(num int) {
			defer wg.Done()
			resultCh <- factorial(num)
		}(n)

		// Wait for the result and write it while the handler is still running;
		// a ResponseWriter must not be used after the handler returns.
		fmt.Fprintf(w, "Factorial(%d) = %d\n", n, <-resultCh)
	})

	log.Println("Server starting at :8080")
	// Start the server (blocking call)
	log.Fatal(http.ListenAndServe(":8080", nil))
	// In a real-world application, you'd handle a graceful shutdown signal
	// and call wg.Wait() before fully exiting.
}
```
Why This Is Useful
- Multiple requests hitting `/factorial` are processed without blocking each other, because `net/http` serves each one in its own goroutine.
- The `sync.WaitGroup` tracks in-flight calculations, so a graceful shutdown process could wait for them to finish (a sketch of that follows below).
- This design scales naturally, illustrating how concurrency in Go addresses real-world demands.
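For completeness, here is a minimal sketch of what that graceful shutdown could look like, using `http.Server.Shutdown` (the timeout and signal handling are illustrative choices, and `wg` stands in for the request-tracking WaitGroup from the example above):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup // would track in-flight goroutines, as above
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for an interrupt signal (e.g., Ctrl+C).
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt)
	<-stop

	// Stop accepting new connections; in-flight requests get 5 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Println("shutdown error:", err)
	}

	// Wait for any background goroutines before exiting.
	wg.Wait()
	log.Println("Server stopped cleanly")
}
```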
Best Practices for Using Goroutines
- Use Synchronization Tools (WaitGroups, Channels, Contexts): Avoid using `time.Sleep()` to keep goroutines alive. Instead, rely on `sync.WaitGroup` or channels.
- Limit the Number of Goroutines: Even though they’re lightweight, spawning tens of thousands unnecessarily can stress the runtime.
- Use Buffered Channels for Rate Limiting: If multiple tasks write to a shared resource, buffering can help you control the pace and avoid immediate blocking.
- Prevent Goroutine Leaks: Use `context.WithCancel()` or other signaling methods to stop goroutines that are no longer needed (see the sketch after this list).
- Watch Out for Shared Data: If you must share data, use Go’s sync primitives (`sync.Mutex`, `sync.RWMutex`) or communicate via channels to avoid race conditions.
- Check for Races in Development: Use Go’s built-in race detector (`-race`) during development and testing to catch hidden race conditions early.
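To make the goroutine-leak point concrete, here is a minimal sketch of stopping a background worker with `context.WithCancel` (the worker and its tick interval are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// poll runs until its context is cancelled, so it cannot leak.
func poll(ctx context.Context) {
	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("worker stopping:", ctx.Err())
			return
		case <-ticker.C:
			fmt.Println("worker polling...")
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go poll(ctx)

	time.Sleep(time.Second) // let the worker run briefly
	cancel()                // signal the worker to exit

	time.Sleep(100 * time.Millisecond) // give it a moment to print its exit
}
```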
Conclusion
Concurrency matters because it lets your applications handle multiple tasks at once, keeping everything fast and responsive. In Go, goroutines make it easy to write concurrent programs without the usual headaches of managing OS threads. By pairing goroutines with channels, you can pass data around cleanly and avoid many of the common pitfalls, such as race conditions, that plague concurrent programming in other languages.
Beyond the basics, you can scale up your applications with worker pools, fan-out/fan-in patterns, and advanced synchronization techniques. While concurrency can solve many performance problems, it’s not a silver bullet: you must think carefully about synchronization, resource usage, and graceful termination. Tools like `sync.WaitGroup`, `sync.Mutex`, and Go’s `-race` detector help you build safe, robust systems. Stay tuned for future posts that explore these tools in more depth. Happy coding!