talks/2011: copy in the lexical scanning talk from its old location
It resides in my code.google.com personal repository, but that
will go away and anyway it belongs here, a place that did not
exist when the talk was given.
The only edits are formatting for the present format, an added
line about Go 1 allowing goroutines in init, and a file rename
to avoid a git gofmt precommit.
Change-Id: I4c2f2c2d44fef9cb574ef4f7223d9cb0c7b15060
Reviewed-on: https://go-review.googlesource.com/12839
Reviewed-by: Andrew Gerrand <adg@golang.org>
diff --git a/2011/lex.slide b/2011/lex.slide
new file mode 100644
index 0000000..831e451
--- /dev/null
+++ b/2011/lex.slide
@@ -0,0 +1,343 @@
+Lexical Scanning in Go
+GTUG Sydney
+30 Aug 2011
+
+Rob Pike
+r@golang.org
+
+* Structural mismatch
+
+Many programming problems realign one data structure to fit another.
+
+- breaking text into lines
+- "blocking" and "deblocking"
+- packet assembly and disassembly
+- parsing
+- lexing
+
+* Sometimes hard
+
+The pieces on either side have independent state, lookahead, buffers, ...
+Can be messy to do well.
+
+Coroutines were invented to solve this problem!
+They enable us to write the two pieces independently.
+
+Let's look at this topic in the context of a lexer.
+
+
+* A new template system
+
+Wanted to replace the old Go template package.
+It had many problems:
+
+- inflexible
+- inexpressive
+- code too fragile
+
+* A new template system
+
+Key change was re-engineering with a true lexer, parser, and evaluator.
+Has arbitrary text plus actions in `{{` `}}`.
+
+.code lex/snippets /Evaluation/,/Control.structures/
+
+* Today we focus on the lexer
+
+Must tokenize:
+
+- the stuff outside actions
+- action delimiters
+- identifiers
+- numeric constants
+- string constants
+- and others
+
+* Lex items
+
+Two things identify each lexed item:
+
+- its type
+- its value; a string is all we need
+
+.code lex/lex1.oldgo /item.represents/,/^}/
+
+* Lex type
+
+The type is just an integer constant.
+We use `iota` to define the values.
+
+.code lex/lex1.oldgo /itemType.identifies/,/type/
+.code lex/lex1.oldgo /const/,/itemEOF/
+
+* Lex type values (continued)
+
+.code lex/lex1.oldgo /itemElse/,/^\)/
+
+* Printing a lex item
+
+`Printf` has a convention making it easy to print any type: just define a `String()` method:
+
+.code lex/lex1.oldgo /func.*item.*String/,/^}/
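As a self-contained sketch of the convention (trimmed from the real `item.String`): any type with a `String() string` method satisfies `fmt.Stringer`, and `Printf`'s `%v` and `%s` verbs call it automatically.

```go
package main

import "fmt"

type itemType int

type item struct {
	typ itemType
	val string
}

// String makes item satisfy fmt.Stringer, so fmt's print
// routines call it automatically. Long values are truncated.
func (i item) String() string {
	if len(i.val) > 10 {
		return fmt.Sprintf("%.10q...", i.val)
	}
	return fmt.Sprintf("%q", i.val)
}

func main() {
	fmt.Println(item{0, "hello"})                   // "hello"
	fmt.Println(item{0, "a very long token value"}) // "a very lon"...
}
```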
+
+* How to tokenize?
+
+Many approaches available:
+
+- use a tool such as lex or ragel
+- use regular expressions
+- use states, actions, and a switch statement
+
+* Tools
+
+Nothing wrong with using a tool but:
+
+- hard to get good errors (can be very important)
+- tend to require learning another language
+- result can be large, even slow
+- often a poor fit
+- but lexing is easy to do yourself!
+
+* Regular expressions
+
+Blogged about this last week.
+
+- overkill
+- slow
+- can explore the state space too much
+- misuse of a dynamic engine to ask static questions
+
+* Let's write our own
+
+It's easy!
+
+Plus, most programming languages lex pretty much the same tokens, so once we learn how, it's trivial to adapt the lexer for the next purpose.
+
+- an argument both for and against tools
+
+* State machine
+
+Many people will tell you to write a switch statement,
+something like this:
+
+.code lex/snippets /One/,/^}/
+
+* State machines are forgetful
+
+Boring and repetitive and error-prone, but anyway:
+
+Why switch?
+
+After each action, you know where you want to be;
+the new state is the result of the action.
+
+But we throw the info away and recompute it from the state.
+
+(A consequence of returning to the caller.)
+
+A tool can compile that out, but so can we.
+
+* What is a state? An action?
+
+State represents where we are and what we expect.
+
+Action represents what we do.
+
+Actions result in a new state.
+
+* State function
+
+Let's put them together: a state function.
+
+Executes an action, returns the next state—as a state function.
+
+Recursive definition but simple and clear.
+
+.code lex/lex1.oldgo /stateFn/,/type/
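A toy illustration of the idea, separate from the template lexer (the machine and state names here are hypothetical): each state performs its action and returns the next state, with `nil` meaning "done".

```go
package main

import "fmt"

type machine struct{ steps int }

// stateFn mirrors the talk's definition: a state is a function
// that executes an action and returns the next state.
type stateFn func(*machine) stateFn

func start(m *machine) stateFn {
	m.steps++
	return middle
}

func middle(m *machine) stateFn {
	m.steps++
	return end
}

func end(m *machine) stateFn {
	m.steps++
	return nil // nil means "done".
}

func main() {
	m := &machine{}
	for state := stateFn(start); state != nil; {
		state = state(m)
	}
	fmt.Println(m.steps) // 3
}
```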
+
+* The run loop
+
+Our state machine is trivial:
+just run until the state goes to `nil`, representing "done".
+
+.code lex/snippets /run.lexes/,/^}/
+
+* The concurrent step
+
+How do we make tokens available to the client?
+Tokens can emerge at times when it's inconvenient to stop and return to the caller.
+
+Use concurrency:
+Run the state machine as a goroutine,
+emit values on a channel.
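A stripped-down sketch of the pattern, far simpler than the template lexer (`lexWords` is hypothetical): run the scanner as a goroutine, emit each token on a channel, and close the channel when done so the receiver's range loop terminates.

```go
package main

import (
	"fmt"
	"strings"
)

// lexWords "lexes" input into space-separated words in a goroutine,
// emitting each word on the returned channel.
func lexWords(input string) chan string {
	items := make(chan string)
	go func() {
		for _, w := range strings.Fields(input) {
			items <- w
		}
		close(items) // No more tokens will be delivered.
	}()
	return items
}

func main() {
	for w := range lexWords("lexing with channels") {
		fmt.Println(w)
	}
}
```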
+
+* The lexer type
+
+Here is the `lexer` type. Notice the channel of items; ignore the rest for now.
+
+.code lex/lex1.oldgo /lexer.holds/,/^}/
+
+* Starting the lexer
+
+A `lexer` initializes itself to lex a string and launches the state machine as a goroutine, returning the lexer itself and a channel of items.
+
+The API will change; don't worry about it now.
+
+.code lex/lex1.oldgo /func.lex/,/^}/
+
+* The real run routine
+
+Here's the real state machine run function, which runs as a goroutine.
+
+.code lex/lex1.oldgo /run.lexes/,/^}/
+
+* The token emitter
+
+A token is a type and a value, but (yay Go) the value can just be sliced from the input string.
+The `lexer` remembers where it is in the input and the emit routine just lobs that substring to the caller as the token's value.
+
+.code lex/lex1.oldgo /input.*scanned/,/pos.*position/
+.code lex/lex1.oldgo /emit.passes/,/^}/
+
+* Starting the machine
+
+As the `lexer` begins, it's looking for plain text, so the initial state is the function `lexText`.
+It absorbs plain text until a "left meta" is encountered.
+
+.code lex/lex1.oldgo /run.lexes/,/^}/
+.code lex/lex1.oldgo /leftMeta/
+
+* lexText
+
+.code lex/lex1.oldgo /^func.lexText/,/^}/
+
+* lexLeftMeta
+
+A trivial state function.
+When we get here, we know there's a `leftMeta` in the input.
+
+.code lex/lex1.oldgo /^func.lexLeftMeta/,/^}/
+
+* lexInsideAction
+
+.code lex/lex1.oldgo /^func.lexInsideAction/,/itemPipe/
+
+* More of lexInsideAction
+
+This will give you the flavor.
+
+.code lex/lex1.oldgo /case.*"/,/lexRawQuote/
+.code lex/lex1.oldgo /case.*9/,/lexIdentifier/
+
+* The next function
+
+.code lex/lex1.oldgo /next.returns.the/,/^}/
+
+* Some lexing helpers
+
+.code lex/lex1.oldgo /ignore.skips/,/^}/
+.code lex/lex1.oldgo /backup.steps/,/^}/
+
+* The peek function
+
+.code lex/lex1.oldgo /peek.returns.but/,/^}/
+
+* The accept functions
+
+.code lex/lex1.oldgo /accept.consumes/,/^}/
+.code lex/lex1.oldgo /acceptRun.consumes/,/^}/
+
+* Lexing a number, including floating point
+
+.code lex/lex1.oldgo /^func.lexNumber/,/imaginary/
+
+* Lexing a number, continued
+
+This is more accepting than it should be, but not by much. Caller must call `Atof` to validate.
+
+.code lex/lex1.oldgo /Is.it.imaginary/,/^}/
+
+* Errors
+
+Easy to handle: emit the bad token and shut down the machine.
+
+.code lex/lex1.oldgo /error.returns/,/^}/
+
+* Summary
+
+Concurrency makes the lexer easy to design.
+
+Goroutines allow lexer and caller (parser) each to run at its own rate, as clean sequential code.
+
+Channels give us a clean way to emit tokens.
+
+* A problem
+
+Can't run a goroutine to completion during initialization.
+Forbidden by the language specification.
+(Raises awful issues about order of init, best avoided.)
+
+That means we can't lex & parse a template during init.
+
+The goroutine is a problem....
+
+_(Note:_This_restriction_was_lifted_in_Go_version_1_but_the_discussion_is_still_interesting.)_
+
+* Design vs. implementation
+
+…but it's not necessary anyway.
+
+The work is done by the design; now we just adjust the API.
+
+We can change the API to hide the channel, provide a function to get the next token, and rewrite the run function.
+
+It's easy.
+
+* A new API
+
+Hide the channel and buffer it slightly, turning it into a ring buffer.
+
+.code lex/r59-lex.go /lex.creates.a.new/,/^}/
+
+* A function for the next item
+
+Traditional lexer API: return next item.
+Includes the modified state machine runner.
+
+.code lex/r59-lex.go /nextItem.returns/,/^}/
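A stripped-down sketch of that synchronous pump (a hypothetical counter stands in for the real lexer): the caller drives the state machine one step at a time, so no goroutine is needed, and a small buffered channel decouples emit from receive.

```go
package main

import "fmt"

type stateFn func(*lexer) stateFn

type lexer struct {
	n     int
	items chan int
	state stateFn
}

// count emits 1, 2, 3 and then returns nil to stop the machine.
func count(l *lexer) stateFn {
	if l.n >= 3 {
		return nil
	}
	l.n++
	l.items <- l.n
	return count
}

// nextItem advances the machine only until an item is available:
// the caller drives the lexer, so no goroutine is needed.
func (l *lexer) nextItem() (int, bool) {
	for {
		select {
		case it := <-l.items:
			return it, true
		default:
			if l.state == nil {
				return 0, false
			}
			l.state = l.state(l)
		}
	}
}

func main() {
	l := &lexer{items: make(chan int, 2), state: count}
	for {
		it, ok := l.nextItem()
		if !ok {
			break
		}
		fmt.Println(it)
	}
}
```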
+
+* That's it
+
+We now have a traditional API for a lexer with a simple, concurrent implementation under the covers.
+
+Even though the implementation is no longer truly concurrent, it still has all the advantages of concurrent design.
+
+We wouldn't have such a clean, efficient design if we hadn't thought about the problem in a concurrent way, without worrying about "restart".
+
+The model completely removes concerns about "structural mismatch".
+
+* Concurrency is a design approach
+
+Concurrency is not about parallelism.
+
+(Although it can enable parallelism).
+
+Concurrency is a way to design a program by decomposing it into independently executing pieces.
+
+The result can be clean, efficient, and very adaptable.
+
+* Conclusion
+
+Lexers are fun.
+
+Concurrency is fun.
+
+Go is fun.
+
+* For more information
+
+Go: [[http://golang.org]]
+
+New templates: http://golang.org/pkg/exp/template/
+
+(Next release will move them out of experimental.)
diff --git a/2011/lex/lex1.oldgo b/2011/lex/lex1.oldgo
new file mode 100644
index 0000000..0a1b1d4
--- /dev/null
+++ b/2011/lex/lex1.oldgo
@@ -0,0 +1,386 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// reformatted, slightly edited version of lex.go from weekly.11-06-23
+
+package template
+
+import (
+ "fmt"
+ "strings"
+ "unicode"
+ "utf8"
+)
+
+// item represents a token returned from the scanner.
+type item struct {
+ typ itemType // Type, such as itemNumber.
+ val string // Value, such as "23.2".
+}
+
+func (i item) String() string {
+ switch i.typ {
+ case itemEOF:
+ return "EOF"
+ case itemError:
+ return i.val
+ }
+ if len(i.val) > 10 {
+ return fmt.Sprintf("%.10q...", i.val)
+ }
+ return fmt.Sprintf("%q", i.val)
+}
+
+// itemType identifies the type of lex items.
+type itemType int
+
+const (
+ itemError itemType = iota // error occurred;
+ // value is text of error
+ itemDot // the cursor, spelled '.'
+ itemEOF
+ itemElse // else keyword
+ itemEnd // end keyword
+ itemField // identifier, starting with '.'
+ itemIdentifier // identifier
+ itemIf // if keyword
+ itemLeftMeta // left meta-string
+ itemNumber // number
+ itemPipe // pipe symbol
+ itemRange // range keyword
+ itemRawString // raw quoted string (includes quotes)
+ itemRightMeta // right meta-string
+ itemString // quoted string (includes quotes)
+ itemText // plain text
+)
+
+// Make the types prettyprint.
+var itemName = map[itemType]string{
+ itemError: "error",
+ itemDot: ".",
+ itemEOF: "EOF",
+ itemElse: "else",
+ itemEnd: "end",
+ itemField: "field",
+ itemIdentifier: "identifier",
+ itemIf: "if",
+ itemLeftMeta: "left meta",
+ itemNumber: "number",
+ itemPipe: "pipe",
+ itemRange: "range",
+ itemRawString: "raw string",
+	itemRightMeta:  "right meta",
+ itemString: "string",
+ itemText: "text",
+}
+
+func (i itemType) String() string {
+ s := itemName[i]
+ if s == "" {
+ return fmt.Sprintf("item%d", int(i))
+ }
+ return s
+}
+
+var key = map[string]itemType{
+ ".": itemDot,
+ "else": itemElse,
+ "end": itemEnd,
+ "if": itemIf,
+ "range": itemRange,
+}
+
+const eof = -1
+
+// stateFn represents the state of the scanner
+// as a function that returns the next state.
+type stateFn func(*lexer) stateFn
+
+// lexer holds the state of the scanner.
+type lexer struct {
+ name string // used only for error reports.
+ input string // the string being scanned.
+ start int // start position of this item.
+ pos int // current position in the input.
+ width int // width of last rune read from input.
+ items chan item // channel of scanned items.
+}
+
+// next returns the next rune in the input.
+func (l *lexer) next() (rune int) {
+ if l.pos >= len(l.input) {
+ l.width = 0
+ return eof
+ }
+ rune, l.width =
+ utf8.DecodeRuneInString(l.input[l.pos:])
+ l.pos += l.width
+ return rune
+}
+
+// peek returns but does not consume
+// the next rune in the input.
+func (l *lexer) peek() int {
+ rune := l.next()
+ l.backup()
+ return rune
+}
+
+// backup steps back one rune.
+// Can be called only once per call of next.
+func (l *lexer) backup() {
+ l.pos -= l.width
+}
+
+// emit passes an item back to the client.
+func (l *lexer) emit(t itemType) {
+ l.items <- item{t, l.input[l.start:l.pos]}
+ l.start = l.pos
+}
+
+// ignore skips over the pending input before this point.
+func (l *lexer) ignore() {
+ l.start = l.pos
+}
+
+// accept consumes the next rune
+// if it's from the valid set.
+func (l *lexer) accept(valid string) bool {
+ if strings.IndexRune(valid, l.next()) >= 0 {
+ return true
+ }
+ l.backup()
+ return false
+}
+
+// acceptRun consumes a run of runes from the valid set.
+func (l *lexer) acceptRun(valid string) {
+ for strings.IndexRune(valid, l.next()) >= 0 {
+ }
+ l.backup()
+}
+
+// lineNumber reports which line we're on. Doing it this way
+// means we don't have to worry about peek double counting.
+func (l *lexer) lineNumber() int {
+ return 1 + strings.Count(l.input[:l.pos], "\n")
+}
+
+// error returns an error token and terminates the scan
+// by passing back a nil pointer that will be the next
+// state, terminating l.run.
+func (l *lexer) errorf(format string, args ...interface{}) stateFn {
+ l.items <- item{
+ itemError,
+ fmt.Sprintf(format, args...),
+ }
+ return nil
+}
+
+// run lexes the input by executing state functions until
+// the state is nil.
+func (l *lexer) run() {
+ for state := lexText; state != nil; {
+ state = state(l)
+ }
+ close(l.items) // No more tokens will be delivered.
+}
+
+// lex launches a new scanner and returns the channel of items.
+func lex(name, input string) (*lexer, chan item) {
+ l := &lexer{
+ name: name,
+ input: input,
+ items: make(chan item),
+ }
+ go l.run() // Concurrently run state machine.
+ return l, l.items
+}
+
+// state functions
+
+const leftMeta = "{{"
+const rightMeta = "}}"
+
+// lexText scans until a metacharacter
+func lexText(l *lexer) stateFn {
+ for {
+ if strings.HasPrefix(l.input[l.pos:], leftMeta) {
+ if l.pos > l.start {
+ l.emit(itemText)
+ }
+ return lexLeftMeta // Next state.
+ }
+ if l.next() == eof { break }
+ }
+ // Correctly reached EOF.
+ if l.pos > l.start {
+ l.emit(itemText)
+ }
+ l.emit(itemEOF) // Useful to make EOF a token.
+ return nil // Stop the run loop.
+}
+
+// lexLeftMeta scans the left "metacharacter", which is known to be present.
+func lexLeftMeta(l *lexer) stateFn {
+ l.pos += len(leftMeta)
+ l.emit(itemLeftMeta)
+ return lexInsideAction // Now inside {{ }}.
+}
+
+// lexRightMeta scans the right "metacharacter", which is known to be present.
+func lexRightMeta(l *lexer) stateFn {
+ l.pos += len(rightMeta)
+ l.emit(itemRightMeta)
+ return lexText
+}
+
+// lexInsideAction scans the elements inside "metacharacters".
+func lexInsideAction(l *lexer) stateFn {
+ // Either number, quoted string, or identifier.
+ // Spaces separate and are ignored.
+ // Pipe symbols separate and are emitted.
+ for {
+ if strings.HasPrefix(l.input[l.pos:], rightMeta) {
+ return lexRightMeta
+ }
+ switch r := l.next(); {
+ case r == eof || r == '\n':
+ return l.errorf("unclosed action")
+ case isSpace(r):
+ l.ignore()
+ case r == '|':
+ l.emit(itemPipe)
+ case r == '"':
+ return lexQuote
+ case r == '`':
+ return lexRawQuote
+ case r == '.':
+ // special look-ahead for ".field" so we don't break l.backup().
+ if l.pos < len(l.input) {
+ r := l.input[l.pos]
+ if r < '0' || '9' < r {
+ return lexIdentifier // itemDot comes from the keyword table.
+ }
+ }
+ fallthrough // '.' can start a number.
+ case r == '+' || r == '-' || '0' <= r && r <= '9':
+ l.backup()
+ return lexNumber
+ case isAlphaNumeric(r):
+ l.backup()
+ return lexIdentifier
+ default:
+ return l.errorf("unrecognized character in action: %#U", r)
+ }
+ }
+ return nil
+}
+
+// lexIdentifier scans an alphanumeric or field.
+func lexIdentifier(l *lexer) stateFn {
+Loop:
+ for {
+ switch r := l.next(); {
+ case isAlphaNumeric(r):
+ // absorb
+ default:
+ l.backup()
+ word := l.input[l.start:l.pos]
+ switch {
+ case key[word] != itemError:
+ l.emit(key[word])
+ case word[0] == '.':
+ l.emit(itemField)
+ default:
+ l.emit(itemIdentifier)
+ }
+ break Loop
+ }
+ }
+ return lexInsideAction
+}
+
+// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
+// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
+// and "089" - but when it's wrong the input is invalid and the parser (via
+// strconv) will notice.
+// TODO: without expressions you can do imaginary but not complex.
+func lexNumber(l *lexer) stateFn {
+ // Optional leading sign.
+ l.accept("+-")
+ // Is it hex?
+ digits := "0123456789"
+ if l.accept("0") && l.accept("xX") {
+ digits = "0123456789abcdefABCDEF"
+ }
+ l.acceptRun(digits)
+ if l.accept(".") {
+ l.acceptRun(digits)
+ }
+ if l.accept("eE") {
+ l.accept("+-")
+ l.acceptRun("0123456789")
+ }
+ // Is it imaginary?
+ l.accept("i")
+ // Next thing mustn't be alphanumeric.
+ if isAlphaNumeric(l.peek()) {
+ l.next()
+ return l.errorf("bad number syntax: %q",
+ l.input[l.start:l.pos])
+ }
+ l.emit(itemNumber)
+ return lexInsideAction
+}
+
+// lexQuote scans a quoted string.
+func lexQuote(l *lexer) stateFn {
+Loop:
+ for {
+ switch l.next() {
+ case '\\':
+ if r := l.next(); r != eof && r != '\n' {
+ break
+ }
+ fallthrough
+ case eof, '\n':
+ return l.errorf("unterminated quoted string")
+ case '"':
+ break Loop
+ }
+ }
+ l.emit(itemString)
+ return lexInsideAction
+}
+
+// lexRawQuote scans a raw quoted string.
+func lexRawQuote(l *lexer) stateFn {
+Loop:
+ for {
+ switch l.next() {
+ case eof, '\n':
+ return l.errorf("unterminated raw quoted string")
+ case '`':
+ break Loop
+ }
+ }
+ l.emit(itemRawString)
+ return lexInsideAction
+}
+
+// isSpace reports whether r is a space character.
+func isSpace(r int) bool {
+ switch r {
+ case ' ', '\t', '\n', '\r':
+ return true
+ }
+ return false
+}
+
+// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
+func isAlphaNumeric(r int) bool {
+ return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
+}
diff --git a/2011/lex/r59-lex.go b/2011/lex/r59-lex.go
new file mode 100644
index 0000000..16ac257
--- /dev/null
+++ b/2011/lex/r59-lex.go
@@ -0,0 +1,431 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package template
+
+import (
+ "fmt"
+ "strings"
+ "unicode"
+ "utf8"
+)
+
+// item represents a token or text string returned from the scanner.
+type item struct {
+ typ itemType
+ val string
+}
+
+func (i item) String() string {
+ switch {
+ case i.typ == itemEOF:
+ return "EOF"
+ case i.typ == itemError:
+ return i.val
+ case i.typ > itemKeyword:
+ return fmt.Sprintf("<%s>", i.val)
+ case len(i.val) > 10:
+ return fmt.Sprintf("%.10q...", i.val)
+ }
+ return fmt.Sprintf("%q", i.val)
+}
+
+// itemType identifies the type of lex items.
+type itemType int
+
+const (
+ itemError itemType = iota // error occurred; value is text of error
+ itemBool // boolean constant
+ itemComplex // complex constant (1+2i); imaginary is just a number
+ itemEOF
+ itemField // alphanumeric identifier, starting with '.', possibly chained ('.x.y')
+ itemIdentifier // alphanumeric identifier
+ itemLeftDelim // left action delimiter
+ itemNumber // simple number, including imaginary
+ itemPipe // pipe symbol
+ itemRawString // raw quoted string (includes quotes)
+ itemRightDelim // right action delimiter
+ itemString // quoted string (includes quotes)
+ itemText // plain text
+ // Keywords appear after all the rest.
+ itemKeyword // used only to delimit the keywords
+ itemDot // the cursor, spelled '.'.
+ itemDefine // define keyword
+ itemElse // else keyword
+ itemEnd // end keyword
+ itemIf // if keyword
+ itemRange // range keyword
+ itemTemplate // template keyword
+ itemWith // with keyword
+)
+
+// Make the types prettyprint.
+var itemName = map[itemType]string{
+ itemError: "error",
+ itemBool: "bool",
+ itemComplex: "complex",
+ itemEOF: "EOF",
+ itemField: "field",
+ itemIdentifier: "identifier",
+ itemLeftDelim: "left delim",
+ itemNumber: "number",
+ itemPipe: "pipe",
+ itemRawString: "raw string",
+ itemRightDelim: "right delim",
+ itemString: "string",
+ // keywords
+ itemDot: ".",
+ itemDefine: "define",
+ itemElse: "else",
+ itemIf: "if",
+ itemEnd: "end",
+ itemRange: "range",
+ itemTemplate: "template",
+ itemWith: "with",
+}
+
+func (i itemType) String() string {
+ s := itemName[i]
+ if s == "" {
+ return fmt.Sprintf("item%d", int(i))
+ }
+ return s
+}
+
+var key = map[string]itemType{
+ ".": itemDot,
+ "define": itemDefine,
+ "else": itemElse,
+ "end": itemEnd,
+ "if": itemIf,
+ "range": itemRange,
+ "template": itemTemplate,
+ "with": itemWith,
+}
+
+const eof = -1
+
+// stateFn represents the state of the scanner as a function that returns the next state.
+type stateFn func(*lexer) stateFn
+
+// lexer holds the state of the scanner.
+type lexer struct {
+ name string // the name of the input; used only for error reports.
+ input string // the string being scanned.
+ state stateFn // the next lexing function to enter
+ pos int // current position in the input.
+ start int // start position of this item.
+ width int // width of last rune read from input.
+ items chan item // channel of scanned items.
+}
+
+// next returns the next rune in the input.
+func (l *lexer) next() (rune int) {
+ if l.pos >= len(l.input) {
+ l.width = 0
+ return eof
+ }
+ rune, l.width = utf8.DecodeRuneInString(l.input[l.pos:])
+ l.pos += l.width
+ return rune
+}
+
+// peek returns but does not consume the next rune in the input.
+func (l *lexer) peek() int {
+ rune := l.next()
+ l.backup()
+ return rune
+}
+
+// backup steps back one rune. Can only be called once per call of next.
+func (l *lexer) backup() {
+ l.pos -= l.width
+}
+
+// emit passes an item back to the client.
+func (l *lexer) emit(t itemType) {
+ l.items <- item{t, l.input[l.start:l.pos]}
+ l.start = l.pos
+}
+
+// ignore skips over the pending input before this point.
+func (l *lexer) ignore() {
+ l.start = l.pos
+}
+
+// accept consumes the next rune if it's from the valid set.
+func (l *lexer) accept(valid string) bool {
+ if strings.IndexRune(valid, l.next()) >= 0 {
+ return true
+ }
+ l.backup()
+ return false
+}
+
+// acceptRun consumes a run of runes from the valid set.
+func (l *lexer) acceptRun(valid string) {
+ for strings.IndexRune(valid, l.next()) >= 0 {
+ }
+ l.backup()
+}
+
+// lineNumber reports which line we're on. Doing it this way
+// means we don't have to worry about peek double counting.
+func (l *lexer) lineNumber() int {
+ return 1 + strings.Count(l.input[:l.pos], "\n")
+}
+
+// error returns an error token and terminates the scan by passing
+// back a nil pointer that will be the next state, terminating l.run.
+func (l *lexer) errorf(format string, args ...interface{}) stateFn {
+ l.items <- item{itemError, fmt.Sprintf(format, args...)}
+ return nil
+}
+
+// nextItem returns the next item from the input.
+func (l *lexer) nextItem() item {
+ for {
+ select {
+ case item := <-l.items:
+ return item
+ default:
+ l.state = l.state(l)
+ }
+ }
+ panic("not reached")
+}
+
+// lex creates a new scanner for the input string.
+func lex(name, input string) *lexer {
+ l := &lexer{
+ name: name,
+ input: input,
+ state: lexText,
+ items: make(chan item, 2), // Two items sufficient.
+ }
+ return l
+}
+
+// state functions
+
+const (
+ leftDelim = "{{"
+ rightDelim = "}}"
+ leftComment = "{{/*"
+ rightComment = "*/}}"
+)
+
+// lexText scans until an opening action delimiter, "{{".
+func lexText(l *lexer) stateFn {
+ for {
+ if strings.HasPrefix(l.input[l.pos:], leftDelim) {
+ if l.pos > l.start {
+ l.emit(itemText)
+ }
+ return lexLeftDelim
+ }
+ if l.next() == eof {
+ break
+ }
+ }
+ // Correctly reached EOF.
+ if l.pos > l.start {
+ l.emit(itemText)
+ }
+ l.emit(itemEOF)
+ return nil
+}
+
+// lexLeftDelim scans the left delimiter, which is known to be present.
+func lexLeftDelim(l *lexer) stateFn {
+ if strings.HasPrefix(l.input[l.pos:], leftComment) {
+ return lexComment
+ }
+ l.pos += len(leftDelim)
+ l.emit(itemLeftDelim)
+ return lexInsideAction
+}
+
+// lexComment scans a comment. The left comment marker is known to be present.
+func lexComment(l *lexer) stateFn {
+ i := strings.Index(l.input[l.pos:], rightComment)
+ if i < 0 {
+ return l.errorf("unclosed comment")
+ }
+ l.pos += i + len(rightComment)
+ l.ignore()
+ return lexText
+}
+
+// lexRightDelim scans the right delimiter, which is known to be present.
+func lexRightDelim(l *lexer) stateFn {
+ l.pos += len(rightDelim)
+ l.emit(itemRightDelim)
+ return lexText
+}
+
+// lexInsideAction scans the elements inside action delimiters.
+func lexInsideAction(l *lexer) stateFn {
+ // Either number, quoted string, or identifier.
+ // Spaces separate and are ignored.
+ // Pipe symbols separate and are emitted.
+ for {
+ if strings.HasPrefix(l.input[l.pos:], rightDelim) {
+ return lexRightDelim
+ }
+ switch r := l.next(); {
+ case r == eof || r == '\n':
+ return l.errorf("unclosed action")
+ case isSpace(r):
+ l.ignore()
+ case r == '|':
+ l.emit(itemPipe)
+ case r == '"':
+ return lexQuote
+ case r == '`':
+ return lexRawQuote
+ case r == '.':
+ // special look-ahead for ".field" so we don't break l.backup().
+ if l.pos < len(l.input) {
+ r := l.input[l.pos]
+ if r < '0' || '9' < r {
+ return lexIdentifier // itemDot comes from the keyword table.
+ }
+ }
+ fallthrough // '.' can start a number.
+ case r == '+' || r == '-' || ('0' <= r && r <= '9'):
+ l.backup()
+ return lexNumber
+ case isAlphaNumeric(r):
+ l.backup()
+ return lexIdentifier
+ default:
+ return l.errorf("unrecognized character in action: %#U", r)
+ }
+ }
+ return nil
+}
+
+// lexIdentifier scans an alphanumeric or field.
+func lexIdentifier(l *lexer) stateFn {
+Loop:
+ for {
+ switch r := l.next(); {
+ case isAlphaNumeric(r):
+ // absorb.
+ case r == '.' && l.input[l.start] == '.':
+ // field chaining; absorb into one token.
+ default:
+ l.backup()
+ word := l.input[l.start:l.pos]
+ switch {
+ case key[word] > itemKeyword:
+ l.emit(key[word])
+ case word[0] == '.':
+ l.emit(itemField)
+ case word == "true", word == "false":
+ l.emit(itemBool)
+ default:
+ l.emit(itemIdentifier)
+ }
+ break Loop
+ }
+ }
+ return lexInsideAction
+}
+
+// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
+// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
+// and "089" - but when it's wrong the input is invalid and the parser (via
+// strconv) will notice.
+func lexNumber(l *lexer) stateFn {
+ if !l.scanNumber() {
+ return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
+ }
+ if sign := l.peek(); sign == '+' || sign == '-' {
+ // Complex: 1+2i. No spaces, must end in 'i'.
+ if !l.scanNumber() || l.input[l.pos-1] != 'i' {
+ return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
+ }
+ l.emit(itemComplex)
+ } else {
+ l.emit(itemNumber)
+ }
+ return lexInsideAction
+}
+
+func (l *lexer) scanNumber() bool {
+ // Optional leading sign.
+ l.accept("+-")
+ // Is it hex?
+ digits := "0123456789"
+ if l.accept("0") && l.accept("xX") {
+ digits = "0123456789abcdefABCDEF"
+ }
+ l.acceptRun(digits)
+ if l.accept(".") {
+ l.acceptRun(digits)
+ }
+ if l.accept("eE") {
+ l.accept("+-")
+ l.acceptRun("0123456789")
+ }
+ // Is it imaginary?
+ l.accept("i")
+ // Next thing mustn't be alphanumeric.
+ if isAlphaNumeric(l.peek()) {
+ l.next()
+ return false
+ }
+ return true
+}
+
+// lexQuote scans a quoted string.
+func lexQuote(l *lexer) stateFn {
+Loop:
+ for {
+ switch l.next() {
+ case '\\':
+ if r := l.next(); r != eof && r != '\n' {
+ break
+ }
+ fallthrough
+ case eof, '\n':
+ return l.errorf("unterminated quoted string")
+ case '"':
+ break Loop
+ }
+ }
+ l.emit(itemString)
+ return lexInsideAction
+}
+
+// lexRawQuote scans a raw quoted string.
+func lexRawQuote(l *lexer) stateFn {
+Loop:
+ for {
+ switch l.next() {
+ case eof, '\n':
+ return l.errorf("unterminated raw quoted string")
+ case '`':
+ break Loop
+ }
+ }
+ l.emit(itemRawString)
+ return lexInsideAction
+}
+
+// isSpace reports whether r is a space character.
+func isSpace(r int) bool {
+ switch r {
+ case ' ', '\t', '\n', '\r':
+ return true
+ }
+ return false
+}
+
+// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
+func isAlphaNumeric(r int) bool {
+ return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
+}
diff --git a/2011/lex/snippets b/2011/lex/snippets
new file mode 100644
index 0000000..c00822b
--- /dev/null
+++ b/2011/lex/snippets
@@ -0,0 +1,21 @@
+Evaluation: {{.Title}}
+Constants and functions: {{printf "%g: %#3X" 1.2+2i 123}}
+Control structures: {{range $s.Text}} {{.}} {{end}}
+
+// One iteration:
+switch state {
+case state1:
+ state = action1()
+case state2:
+ state = action2()
+case state3:
+ state = action3()
+}
+
+// run lexes the input by executing state functions
+// until the state is nil.
+func run() {
+ for state := startState; state != nil; {
+ state = state(lexer)
+ }
+}