Now that we’ve supplemented our existing numbers with variables and input, we’d like to manipulate them more freely, by using them together in expressions.
In BASIC, an expression is a sequence of arguments and operations that can be evaluated to produce a result. Roughly, a BASIC expression is one that relies on mathematical or logical operators:
```basic
#lang basic-demo-2
10 print -1 + 2 - 3 * (4 + 5) / 6 ^ 7 + 8 mod 9
```

```
8.999903549382717
```
Unlike Racket, BASIC is not made exclusively of expressions. Rather, it’s a mix of statements (like print) and expressions. Also, BASIC relies on standard infix notation (with the operations written between the arguments) as opposed to Racket’s prefix notation.
At the end of variables and input, we left our parser like this, pledging that we would improve our casual handling of expressions and numbers:
```brag
#lang brag
b-program : [b-line] (/NEWLINE [b-line])*
b-line : b-line-num [b-statement] (/":" [b-statement])* [b-rem]
@b-line-num : INTEGER
@b-statement : b-end | b-print | b-goto | b-let | b-input
b-rem : REM
b-end : /"end"
b-print : /"print" [b-printable] (/";" [b-printable])*
@b-printable : STRING | b-expr
b-goto : /"goto" b-expr
b-let : [/"let"] b-id /"=" (b-expr | STRING)
b-input : /"input" b-id
@b-id : ID
b-expr : b-sum
b-sum : b-number (/"+" b-number)*
@b-number : INTEGER | DECIMAL | b-id
```
So far, we’ve made some simplifying but unrealistic assumptions. Let’s look at the b-expr rule near the bottom. The only kind of expression we support is b-sum, which is a series of b-number elements with + between them:
```basic
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8
```
Once parsed, the b-number nodes are spliced, so in the parse tree, this expression becomes:
```racket
(b-sum 1 2 3 4 5 6 7 8)
```
Our corresponding b-sum function in "expr.rkt" is likewise casual:
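(That listing is missing here; as a sketch — this is an assumed reconstruction, not necessarily the book’s exact code — a suitably casual b-sum need only splice all of its arguments into Racket’s variadic +:)

```racket
#lang br
(provide b-sum)

;; Assumed reconstruction: splice every parsed argument
;; straight into a single Racket addition.
(define-macro (b-sum NUM ...)
  #'(+ NUM ...))
```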
Of course, real BASIC expressions aren’t limited to flat lists of values, summed together. We have to support gnarlier expressions with a variety of operations, and subexpressions within, like the one we saw above:
```basic
-1 + 2 - 3 * (4 + 5) / 6 ^ 7 + 8 mod 9
```
To do this, we obviously have to add some new math operators—namely - (subtraction), * (multiplication), / (division), ^ (exponentiation), and mod (modulo). But we also need to confront some subtler considerations previously avoided.
Throughout the tutorials we’ve noted the importance of choosing wisely when modeling elements of a programming language with Racket. Usually, the best choice is the one that’s most analogous. For example, in variables and input, we implemented BASIC variables directly as Racket variables (rather than indirectly, as a Racket hash table).
This time, however, we have to model infix expressions, which have no direct analog in Racket—in fact, they don’t exist at all. (Racket’s limited infix notation is just prefix notation, cosmetically rearranged.)
Moreover, we can’t just casually convert infix expressions to Racket expressions, the way we did with b-sum. Infix expressions are evaluated according to certain rules:
Order: operations within an infix expression are applied from left to right. So an infix expression like this:
```basic
1 + 2 - 3 + 4 - 5 + 6
```
Corresponds to this Racket expression, where the left-to-right infix operations are converted to inner-to-outer prefix operations:
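Worked out step by step, each new operation wraps the result of the previous one:

```racket
;; 1 + 2 - 3 + 4 - 5 + 6, applied left to right:
(+ (- (+ (- (+ 1 2) 3) 4) 5) 6) ; evaluates to 5
```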
Precedence: certain operations need to be applied before others. We probably remember this from sixth-grade math, but just in case—if we have an infix expression like this:
```basic
1 + 2 * 3 + 4 * 5 + 6
```
The * operations have higher precedence than the +. So the expression is evaluated as if it were written like so, and then the operations are applied in the usual left-to-right way:
```basic
1 + (2 * 3) + (4 * 5) + 6
```
In Racket, we would write the original expression like so:
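Because Racket’s + is variadic, only the multiplications need their own subexpressions:

```racket
(+ 1 (* 2 3) (* 4 5) 6) ; evaluates to 33
```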
Subexpressions: the usual precedence rules can be overridden by using parenthesized subexpressions. In effect, subexpressions are another layer of precedence. Starting with the previous example, we can force the addition operations to happen first by parenthesizing them, and then the remaining operations go left to right:
```basic
(1 + 2) * (3 + 4) * (5 + 6)
```
In Racket, we’d write the expression this way:
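Here the additions move inward and the multiplications outward:

```racket
(* (+ 1 2) (+ 3 4) (+ 5 6)) ; evaluates to 231
```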
Taking these rules together, our original gnarly example:
```basic
-1 + 2 - 3 * (4 + 5) / 6 ^ 7 + 8 mod 9
```
Would be written in Racket like so:
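Applying precedence first (parentheses, then ^, then * / mod, then + and -) and left-to-right ordering within each level:

```racket
(+ (- (+ (- 1) 2)
      (/ (* 3 (+ 4 5)) (expt 6 7)))
   (modulo 8 9))
;; with exact integers Racket returns the rational 93311/10368
;; (that is, 9 - 1/10368); BASIC's floating-point division
;; instead produces the inexact 8.9999… result shown above
```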
We need to figure out a method of transforming any BASIC infix expression into a Racket expression that correctly implements the ordering & precedence rules. Since Racket doesn’t directly support infix expressions, we have two choices:
Add support for infix expressions to Racket itself. Then in our BASIC interpreter, we can just pass through whole infix expressions, and our new-and-improved Racket will do the rest.
This is actually a totally reasonable idea. As we know, Racket itself is designed to be extensible (for instance, with macros). We could create a tiny domain-specific language for interpreting infix expressions that we can invoke from within Racket. (Or even better, we could use one that already exists.)
We could also update our parser with new rules, so it automatically parses infix expressions into simpler nested expressions with the correct order and precedence. Then we update our expander to support the evaluation of these smaller expressions.
Since we already know how to make parser rules, this will be our approach.
We know that in Racket, runtime expressions are evaluated from innermost to outermost. We also know that when the parser descends into a new rule, it creates a subnode in the parse tree inside the current node. So every new rule moves us farther inward.
Therefore, we can indirectly control evaluation priority through parsing. Every time we want to give an expression more priority, we can invoke another parser rule, which will move that expression deeper in the parse tree.
As for infix operations, we can treat order and precedence of operations as two varieties of evaluation priority.
Ordering of operations can be created through recursive parsing rules.
Even though an infix expression can be indefinitely long, we can always break it into a series of two-argument computations—the current infix operator applied to the argument on its left and right. (This is how we would proceed with pencil & paper; it’s also how we converted our previous infix examples into Racket code.)
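For instance, a sum like 1 + 2 + 3 + 4 decomposes into a chain of two-argument sums:

```racket
;; 1 + 2 + 3 + 4  is read as  ((1 + 2) + 3) + 4
(+ (+ (+ 1 2) 3) 4) ; evaluates to 10
```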
We already used this idea of dyadic decomposition in the stackerizer language. In that case, we converted variadic expressions into dyadic ones by recursively calling a macro.
In a parser, we can accomplish something similar by recursively calling a parsing rule. Suppose we have this grammar:
```brag
#lang brag
program : sum
sum : var (/"+" var)*
@var : "a" | "b" | "c" | "d"
```
And we parse the string a+b+c+d:
```racket
(require "grammar.rkt")
(parse-to-datum "a+b+c+d")
```
We get this result:
```racket
'(program (sum "a" "b" "c" "d"))
```
Now suppose we only want sum nodes that are dyadic at maximum. We can update the sum rule with a recursive reference, and remove the * quantifier so it only matches one or two elements:
```brag
#lang brag
program : sum
sum : var [/"+" sum]
@var : "a" | "b" | "c" | "d"
```
Now the parse tree changes:
```racket
'(program (sum "a" (sum "b" (sum "c" (sum "d")))))
```
Our recursive use of sum is on the right side of the pattern. Therefore, subnodes are formed around each right-hand element. That means that we’re assigning evaluation priority from right to left: (sum "d") is evaluated, then (sum "c" ···), etc.
If we want to enforce left-to-right ordering, we can just swap the position of the recursive sum in the pattern:
```brag
#lang brag
program : sum
sum : [sum /"+"] var
@var : "a" | "b" | "c" | "d"
```
This produces a parse tree with left-to-right priority:
```racket
'(program (sum (sum (sum (sum "a") "b") "c") "d"))
```
Precedence of operations can be created through a chained hierarchy of parsing rules. Again, a simple example. Suppose we have this grammar:
```brag
#lang brag
program : op*
op : [op ("+" | "*")] var
@var : "a" | "b" | "c"
```
And we parse the string a+b*c:
```racket
(require "grammar.rkt")
(parse-to-datum "a+b*c")
```
We get this parse tree:
```racket
'(program (op (op (op "a") "+" "b") "*" "c"))
```
Following the technique we learned above, we enforce left-to-right ordering by recursively calling the op rule as the left element in its pattern. This means that a+b will be evaluated first.
But suppose that instead, we want b*c to be evaluated first. If we change op to sum, and insert a mult subrule into our grammar:
```brag
#lang brag
program : sum*
sum : [sum "+"] mult
mult : [mult "*"] var
@var : "a" | "b" | "c"
```
Then the b*c substring will be nested more deeply, and thus evaluated first:
```racket
'(program (sum (sum (mult "a")) "+" (mult (mult "b") "*" "c")))
```
The key maneuver here is slightly counterintuitive: we’re chaining our grammar rules so that every sum node contains mult subnodes, and every mult node lives inside a sum parent node. Our string a+b*c, which looks like it contains two operations, becomes five operations in our parse tree.
But we won’t panic. This chaining ensures that our multiplications always have higher evaluation priority than our sums, because they’re nested deeper in the parse tree. And the extra oneargument instances of our operations can be boiled out by the expander—e.g., (sum (mult "a")) can just become "a". If we ignore these trivial operations, the parse tree looks like this:
```racket
'(program (sum "a" "+" (mult "b" "*" "c")))
```
Which is exactly what we wanted.
Relying on these two parsing techniques, we can now implement BASIC infix expressions with order and precedence. We can also use these techniques to implement subexpressions, which we can think of as a further layer of precedence.
As always, we start by updating "lexer.rkt" to recognize the new tokens we’ll be using in the source. To the reserved-terms list we add our other math operators (-, *, /, ^, and mod), plus parentheses (to support subexpressions):
```racket
#lang br
(require brag/support)

(define-lex-abbrev digits (:+ (char-set "0123456789")))

(define-lex-abbrev reserved-terms
  (:or "print" "goto" "end" "+" ":" ";" "let" "=" "input"
       "-" "*" "/" "^" "mod" "(" ")"))

(define basic-lexer
  (lexer-srcloc
   ["\n" (token 'NEWLINE lexeme)]
   [whitespace (token lexeme #:skip? #t)]
   [(from/stop-before "rem" "\n") (token 'REM lexeme)]
   [reserved-terms (token lexeme lexeme)]
   [(:seq alphabetic (:* (:or alphabetic numeric "$")))
    (token 'ID (string->symbol lexeme))]
   [digits (token 'INTEGER (string->number lexeme))]
   [(:or (:seq (:? digits) "." digits)
         (:seq digits "."))
    (token 'DECIMAL (string->number lexeme))]
   [(:or (from/to "\"" "\"") (from/to "'" "'"))
    (token 'STRING (substring lexeme 1 (sub1 (string-length lexeme))))]))

(provide basic-lexer)
```
Next we add the parser rules that handle operator precedence. In BASIC, operations are ordered from lowest to highest precedence like so:
Addition (+) and subtraction (-).
Multiplication (*), division (/), and modulo (mod).
Negation (- with one argument rather than two).
Exponentiation (^).
Parenthesized subexpressions.
Then, within a precedence level, operations are performed from left to right.
Following the approach we discovered in the last section, we introduce a series of chained parser rules corresponding to each level of precedence. Each of these rules will call itself recursively on the left side of its pattern to enforce left-to-right ordering:
```brag
#lang brag
b-program : [b-line] (/NEWLINE [b-line])*
b-line : b-line-num [b-statement] (/":" [b-statement])* [b-rem]
@b-line-num : INTEGER
@b-statement : b-end | b-print | b-goto | b-let | b-input
b-rem : REM
b-end : /"end"
b-print : /"print" [b-printable] (/";" [b-printable])*
@b-printable : STRING | b-expr
b-goto : /"goto" b-expr
b-let : [/"let"] b-id /"=" (b-expr | STRING)
b-input : /"input" b-id
@b-id : ID
b-expr : b-sum
b-sum : [b-sum ("+" | "-")] b-product
b-product : [b-product ("*" | "/" | "mod")] b-neg
b-neg : ["-"] b-expt
b-expt : [b-expt "^"] b-value
@b-value : b-number | b-id | /"(" b-expr /")"
@b-number : INTEGER | DECIMAL
```
b-sum parses both addition and subtraction operations; b-product parses multiplication, division, and modulo operations. We definitely don’t want to break these individual operations into chained rules, because that would improperly create precedence among them. Rather, we’ll think of b-sum and b-product as “sets of operations at the same level of precedence” and sort out the details in the expander.
The negation rule b-neg has no recursive element because it only takes one argument.
Near the bottom, we’ve introduced a new b-value rule that encompasses numbers, identifiers, and subexpressions. The subexpression pattern recursively relies on b-expr, which means that expressions can be nested to any depth.
In "expr.rkt", we keep our previous b-expr function, but everything else will be replaced:
```racket
#lang br
(provide b-expr b-sum b-product b-neg b-expt)

(define (b-expr expr)
  (if (integer? expr) (inexact->exact expr) expr))

(define-macro-cases b-sum
  [(_ VAL) #'VAL]
  [(_ LEFT "+" RIGHT) #'(+ LEFT RIGHT)]
  [(_ LEFT "-" RIGHT) #'(- LEFT RIGHT)])

(define-macro-cases b-product
  [(_ VAL) #'VAL]
  [(_ LEFT "*" RIGHT) #'(* LEFT RIGHT)]
  [(_ LEFT "/" RIGHT) #'(/ LEFT RIGHT 1.0)]
  [(_ LEFT "mod" RIGHT) #'(modulo LEFT RIGHT)])

(define-macro-cases b-neg
  [(_ VAL) #'VAL]
  [(_ "-" VAL) #'(- VAL)])

(define-macro-cases b-expt
  [(_ VAL) #'VAL]
  [(_ LEFT "^" RIGHT) #'(expt LEFT RIGHT)])
```
In each of the macros implementing the parse-tree rules, we start by handling the trivial one-argument case. After that, we match the operation token and convert the expression to the corresponding Racket function.
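For example, a nested pair of b-sum nodes (with the trivial one-argument wrappers already elided for clarity) expands inside-out into plain Racket arithmetic:

```racket
;; parse-tree fragment for 1 + 2 - 3:
;;   (b-sum (b-sum 1 "+" 2) "-" 3)
;; which the macro cases rewrite to:
(- (+ 1 2) 3) ; evaluates to 0
```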
We also introduce a tiny bit of new notation: in a syntax pattern, we can use an underscore _ to mean “match anything”. We use it in our macro patterns so that we can avoid incessantly repeating the name of the macro, which we know always appears first. (The underscore is commonly used to indicate something in a pattern that we plan to ignore. We used analogous notation in match when we wrote the syntax colorer.)
Everything is straightforward except division. Unlike Racket, BASIC has no concept of an exact rational fraction. It only uses floating-point fractions. So in the division expression, we include 1.0 to force a floating-point result.
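At the REPL, the difference looks like this:

```racket
(/ 27 4)     ; 27/4, an exact rational
(/ 27 4 1.0) ; 6.75, because the float argument makes the result inexact
```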
First, let’s try our original gnarly expression:
```basic
#lang basic
10 print -1 + 2 - 3 * (4 + 5) / 6 ^ 7 + 8 mod 9
```

```
8.999903549382717
```
And compare it to the equivalent Racket code:
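Mirroring the expander’s output (including the 1.0 trick described above), the equivalent Racket code would be roughly:

```racket
(+ (- (+ (- 1) 2)
      (/ (* 3 (+ 4 5)) (expt 6 7) 1.0))
   (modulo 8 9))
```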
```
8.999903549382717
```
A promising start. Let’s now test our precedence rules by making expressions that have higher-precedence operations on the right:
```basic
#lang basic
10 print 1 + 2 * 3 rem 7
20 print (1 + 2) * 3 rem 9
30 print 3 + -4 rem -1
40 print -2 ^ 4 rem -16
50 print (-2) ^ 4 rem 16
```

```
7
9
-1
-16
16
```
Let’s also see if our left-to-right ordering works:
```basic
#lang basic
10 print 24 / 4 / 2 rem 3
20 print 24 / (4 / 2) rem 12
30 print 2 ^ 3 ^ 2 rem 64
40 print 2 ^ (3 ^ 2) rem 512
```

```
3
12
64
512
```
We also check that we can do the same operations with variables instead of numbers:
```basic
#lang basic
5 x = 24 : d = 4 : b = 2 : c = 3
10 print x / d / b rem 3
20 print x / (d / b) rem 12
30 print b ^ c ^ b rem 64
40 print b ^ (c ^ b) rem 512
```

```
3
12
64
512
```
Not bad. If we want to add more operations at new levels of precedence, we can insert them into our existing chain of parsing rules. We’ll see how this works in the next section on conditionals.
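As a hypothetical sketch (the rule names below are invented for illustration, not taken from that section), a lower-precedence comparison operator would slot into the chain between b-expr and b-sum:

```brag
; hypothetical: comparisons bind more loosely than + and -
b-expr : b-comparison
b-comparison : [b-comparison ("<" | ">" | "=")] b-sum
b-sum : [b-sum ("+" | "-")] b-product
```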
By the way—in our "expr.rkt" module, macros aren’t mandatory. We’re not doing anything tricky. If we wanted, the macros could be rewritten as functions without affecting program behavior. For instance, we could use define-cases (which is like define-macro-cases, but for functions) to handle the different calling patterns, and case to choose the function to apply:
```racket
#lang br
(provide b-expr b-sum b-product b-neg b-expt)

(define (b-expr expr)
  (if (integer? expr) (inexact->exact expr) expr))

(define-cases b-sum
  [(_ arg) arg]
  [(_ left op right) ((case op
                        [("+") +]
                        [("-") -]) left right)])

(define-cases b-product
  [(_ arg) arg]
  [(_ left op right) ((case op
                        [("*") *]
                        [("/") (λ (l r) (/ l r 1.0))]
                        [("mod") modulo]) left right)])

(define-cases b-neg
  [(_ val) val]
  [(_ _ val) (- val)])

(define-cases b-expt
  [(_ val) val]
  [(_ left _ right) (expt left right)])
```
This implementation will work the same way.
Which is better? The rule of thumb is that we should use functions where we can, and macros where we must. Why? Because macros get evaluated at compile time (which has a cost) and they create more runtime code, which in turn has to be compiled and interpreted (which also has a cost). Calling functions at run time is typically more efficient.
But claims about performance always depend on the context. In this case, the competing macros and functions are both tiny. Some casual benchmarking with a BASIC test program suggests they’re essentially equal in speed. So we’ll stick with the macro solution. But the alternative exists.