
Scanning CSV in Go

Updated October 29, 2025

For the purpose of this article, consider the following CSV data, slightly modified from the docs for encoding/csv:

csvData := strings.NewReader(strings.Join([]string{
	`first_name,last_name,username`,
	`"Rob","Pike",rob`,
	`Ken,Thompson,ken`,
	`"Robert","Griesemer","gri"`,
}, "\n"))

Here's how you read the data, line by line, using the Reader provided in that package:

reader := csv.NewReader(csvData)

for {
	record, err := reader.Read()
	if err == io.EOF {
		break
	}
	if err != nil {
		// handle the error...
		// break? continue? neither?
	}

	fmt.Println(record)
}

// Output:
// [first_name last_name username]
// [Rob Pike rob]
// [Ken Thompson ken]
// [Robert Griesemer gri]

There are a few awkward elements to this approach:

  1. We check for io.EOF on every iteration of the loop.
  2. We check for a non-nil error on every iteration of the loop.
  3. It's not clear what kinds of non-nil errors might appear, or what handling logic the programmer should use in each case.

Generally, I expect CSV files to be well-formed and I break out of the read loop at the first sign of trouble. If that's also the approach you generally use, well, we've got an even more elegant way to read CSV data!

https://pkg.go.dev/github.com/smartystreets/scanners/csv

scanner := csv.NewScanner(csvData)

for scanner.Scan() {
	fmt.Println(scanner.Record())
}

if err := scanner.Error(); err != nil {
	log.Panic(err)
}

// Output:
// [first_name last_name username]
// [Rob Pike rob]
// [Ken Thompson ken]
// [Robert Griesemer gri]

This will look very familiar if you've ever used bufio.Scanner. No more cumbersome checks for io.EOF or errors in the body of the loop! By default, scanner.Scan() returns false at the first sign of an error from the underlying encoding/csv.Reader. So, how do you customize the scanner's behavior, you ask? What if the CSV data uses another character for the separator/delimiter/comma? Observe the variadic, functional configuration options accepted by csv.NewScanner:

csvDataCustom := strings.Join([]string{
	`first_name;last_name;username`, // ';' is the delimiter!
	`"Rob";"Pike";rob`,
	`# lines beginning with a # character are ignored`, // '#' is the comment character!
	`Ken;Thompson;ken`,
	`"Robert";"Griesemer";"gri"`,
}, "\n")

scanner := csv.NewScanner(strings.NewReader(csvDataCustom),
	csv.Comma(';'), csv.Comment('#'), csv.ContinueOnError(true))

for scanner.Scan() {
	if err := scanner.Error(); err != nil {
		log.Panic(err)
	} else {
		fmt.Println(scanner.Record())
	}
}

// Output:
// [first_name last_name username]
// [Rob Pike rob]
// [Ken Thompson ken]
// [Robert Griesemer gri]

Pretty flexible, right? And notice that we still don't have to detect io.EOF; that happens internally and results in scanner.Scan() returning false.

Now, what if you are scanning rows into struct values whose fields mirror the CSV schema? Suppose we have a Contact type that matches our columns. What's a nice way to encapsulate the translation from a CSV record to a Contact? Embed a *csv.Scanner in a ContactScanner and shadow the Record method to return a Contact rather than the []string record!

package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/smartystreets/scanners/csv"
)

type Contact struct {
	FirstName string
	LastName  string
	Username  string
}

type ContactScanner struct{ *csv.Scanner }

func NewContactScanner(reader io.Reader) *ContactScanner {
	inner := csv.NewScanner(reader)
	inner.Scan() // skip the header!
	return &ContactScanner{Scanner: inner}
}

func (this *ContactScanner) Record() Contact {
	fields := this.Scanner.Record()
	return Contact{
		FirstName: fields[0],
		LastName:  fields[1],
		Username:  fields[2],
	}
}

func main() {
	csvData := strings.NewReader(strings.Join([]string{
		`first_name,last_name,username`,
		`"Rob","Pike",rob`,
		`Ken,Thompson,ken`,
		`"Robert","Griesemer","gri"`,
	}, "\n"))

	scanner := NewContactScanner(csvData)

	for scanner.Scan() {
		fmt.Printf("%#v\n", scanner.Record())
	}

	if err := scanner.Error(); err != nil {
		log.Panic(err)
	}

	// Output:
	// main.Contact{FirstName:"Rob", LastName:"Pike", Username:"rob"}
	// main.Contact{FirstName:"Ken", LastName:"Thompson", Username:"ken"}
	// main.Contact{FirstName:"Robert", LastName:"Griesemer", Username:"gri"}
}

But we can go even further if you're not averse to struct tags and reflection. Notice below that the StructScanner populates a pointer to a struct whose fields are decorated with csv struct tags corresponding to the header column names:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/smartystreets/scanners/csv"
)

type Contact struct {
	FirstName string `csv:"first_name"`
	LastName  string `csv:"last_name"`
	Username  string `csv:"username"`
}

func main() {
	csvData := strings.NewReader(strings.Join([]string{
		`first_name,last_name,username`,
		`"Rob","Pike",rob`,
		`Ken,Thompson,ken`,
		`"Robert","Griesemer","gri"`,
	}, "\n"))

	scanner, err := csv.NewStructScanner(csvData)
	if err != nil {
		log.Panic(err)
	}

	for scanner.Scan() {
		var contact Contact
		if err := scanner.Populate(&contact); err != nil {
			log.Panic(err)
		}
		fmt.Printf("%#v\n", contact)
	}

	if err := scanner.Error(); err != nil {
		log.Panic(err)
	}

	// Output:
	// main.Contact{FirstName:"Rob", LastName:"Pike", Username:"rob"}
	// main.Contact{FirstName:"Ken", LastName:"Thompson", Username:"ken"}
	// main.Contact{FirstName:"Robert", LastName:"Griesemer", Username:"gri"}
}

Clearly, there are many ways to read a CSV file (including other nicely written packages). Happy (CSV) scanning!

go get -u github.com/smartystreets/scanners/csv

Source Code
