
Rhombus expansion and enforestation on shrubberies #162

Open · wants to merge 16 commits into base: master
Conversation

mflatt
Member

@mflatt mflatt commented Jul 14, 2021

Rendered

This proposal builds on #122, defining a macro-expansion layer suitable for shrubberies.

In other words, it still doesn't define a language like #lang rhombus, but it defines an expansion and enforestation layer that works toward that goal. It's analogous to Racket's core expander, but also defined in terms of Racket's expander.

The implementation in the proposal is currently the same as the https://github.com/mflatt/shrubbery-rhombus-0 package that makes #lang shrubbery run in Racket and DrRacket (with just a few operators and a definition form).

@jeapostrophe
Collaborator

Big comments:

The four syntactic categories are not really defended. You basically say, "Racket has three, Rhombus adds one more". If someone has never heard of Racket, how would we explain these things?

What even is a "syntactic category"? I think something like, "Based on the surrounding context, this shrubbery blob is put into one of these categories and it is the choice of the context NOT of the blob". In a naive LISP, there's just two categories because e := atom | (e . e) and there's no extension for atoms; although Racket is not naive like this.

If that answer is correct, then I think this document should say something about how we know what syntactic category a particular blob is in. I think your API is like this, because it says that rhombus-top is the interface to specifying a sequence of declarations, definitions, and expressions... but it doesn't actually say how we know that a particular blob is any one of those. Am I meant to look at parse.rkt to see how that function decides? I think we need a "guide" explanation of how to know.

I think that explanation should also justify why it is this particular set of four things.... maybe:

  • declarations --- Things at the top of a module are special... Why? I think I know the answer is that they might be something that the Racket macro expander has to look at first to discover more macros. But, what is an explanation for these things being special independent of the Racket macro expander? Perhaps, "declarations are part of a module, which they can influence by introducing dependencies"? If something isn't explicitly a declaration, then most declaration-consumers will take a definition?

  • definitions --- A definition is part of a "scope" which it can influence by introducing bindings. If something isn't explicitly a definition, then most definition-consumers will take an expression?

  • expressions --- An expression cannot influence anything syntactically (except through procedural-level operations like syntax-local-lift-declaration) so it can only expand to a value expression.

These feel quite natural and general. However, patterns feel very specific:

  • patterns --- Most binding positions will use a matching algorithm that receives a value, checks if it is valid, then defines (syntax and value) bindings based on features of that value, such as the two components of a cons cell.

That feels very particular to one "language". In other words, all of these categories are specifying an "interface" --- what they receive and what they return --- where what they receive is syntax with a promise about where it occurs and what they return is the "influence" or "effect" they can have on their context. The categories are roughly defined by the effect they can have: declarations do module-effects, like imports and submodules; definitions do binding-effects; expressions have no effects. Your "patterns" have a constraint-effect (the matcher function) and a binding-effect, where the first effect is "outward" in that it communicates to the pattern match "Don't select me" and the second effect is "inward" in that it influences a "sibling" based on the particular syntax of the matcher. Perhaps these outward/inward effects could be expressed more generally:

  • bindings --- A binding position is a core concept that occurs in many declarations, definitions, and expressions and it can expand to a pair of syntaxes: one which is an outward expression and the second which is an inward definition.

A "match transformer" might be

(define-syntax cons
  (singleton-struct .... #:prop binding-transformer cons-bt))
(define-syntax (cons-bt stx)
  (syntax-parse stx
    [(_ carb:binding cdrb:binding)
     (cons
      ;; outward: a matcher that can tell the pattern match "Don't select me"
      #'(lambda (x) (and (cons? x) (carb.out (car x)) (cdrb.out (cdr x))))
      ;; inward: definitions for the components of the matched value
      #'(begin
          (splicing-syntax-parameterize
              ([current-match-value (car (current-match-value))])
            (carb.in))
          (splicing-syntax-parameterize
              ([current-match-value (cdr (current-match-value))])
            (cdrb.in))))]))

This, of course, "knows" that it is part of match, which is why it knows that the out effect is expected to be a procedure and the in effect is expected to look at current-match-value.

I am particularly concerned about how this idea of binding patterns could, for instance, be used for non-value work, like in type declarations; consider this Haskell:

myFunction :: forall a. Ord a => [a] -> [(a, a)]

Perhaps we could write

myFunction :: forall (a <: Ord) . [a] -> [(a, a)]

to use a bounded quantification style with a binding pattern. In this case the <: operator would need to do something like

(cond
 [(syntax-am-i-doing-type-expansion?)
  (cons #'(constraint) #'(expose-type-class-members-of constraint CALLER-FILL-ME-IN))]
 [(syntax-or-is-it-pattern-matching?)
  ....])

Small comments:

I believe that this sentence --- A potential advantage of non-transitive precedence avoiding an order among operands that have make no sense next to each other. --- has a typo, because I can't understand it.

If two operators both claim a precedence relationship to each other, the relationship must be consistent; --- What is the consequence of violation of this "must"? Enforestation is undefined? It's a compile-time error?
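To make that question concrete, here is a toy model of pairwise, possibly non-transitive precedence (a Python sketch with invented names, not the proposal's actual Racket API) in which inconsistent claims are reported as a compile-time-style error:

```python
# Toy model of pairwise operator precedence. Each operator declares its
# relation ("stronger" | "same" | "weaker") to some other operators;
# the relation may be declared by either side or both. Names here are
# invented for illustration.

class Operator:
    def __init__(self, name, precedences):
        self.name = name
        # map from another operator's name to this operator's relation to it
        self.precedences = precedences

def compare(left, right):
    """How does `left` bind relative to `right`? Consults both operators'
    declarations and rejects inconsistent claims."""
    a = left.precedences.get(right.name)    # left's claim about right
    b = right.precedences.get(left.name)    # right's claim about left
    flip = {"stronger": "weaker", "weaker": "stronger", "same": "same"}
    if a is not None and b is not None and a != flip[b]:
        # One possible answer to the "must" question: make a violation a
        # compile-time error rather than leaving enforestation undefined.
        raise SyntaxError(f"inconsistent precedence between "
                          f"{left.name} and {right.name}")
    if a is not None:
        return a
    if b is not None:
        return flip[b]
    return None  # no declared relationship

times = Operator("*", {"+": "stronger"})
plus = Operator("+", {"*": "weaker"})
print(compare(times, plus))   # consistent claims -> "stronger"
```

Under this reading, "must be consistent" would mean that enforestation checks both operators' declarations at the point of comparison and raises an error on disagreement.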

Big shed --- I feel like the (cons/c (or/c identifier? 'default) '(stronger same weaker)) interface is verbose and think (list/c (listof identifier?) x3 (or/c stronger same weaker)) where the sets are written out with one value for default is better. If you don't agree, you should write down the error rules for when something appears twice.

Along similar lines, the Rhombus expander supports a certain style of infix and prefix operators, but it does not directly support all possible kinds of operators. --- I think you should explicitly name some desirable operators you know you won't support.

I think that :: as a declaration operator is very desirable

@mflatt
Member Author

mflatt commented Jul 15, 2021

@jeapostrophe - thanks for the comments.

"Binding" is a better word than "pattern", so I've switched to using that word. Where "binding" was previously used for the define-syntax sense of mapping an operator name to an operator implementation, the proposal now uses the word "mapping".

You're right that the category of a shrubbery for expansion is determined by its context, and I've updated the description to say that. I've also updated to clarify that the four categories are just the ones directly supported by the expander, while a language built on the expander can have even more categories. The rationale now starts with a paragraph justifying the four categories (which is simple: experience with Racket).

I'm not sure I understand your type-declaration example. I would expect a typed language to have an additional syntactic category for types, and the rationale now notes that possibility. I would hope that the new category is supported through a new kind of compile-time value, and not a compile-time function that an expander calls to determine the category where it's being used.

I take your "If someone has never heard of Racket, how would we explain these things?" comment as being primarily about how to justify the four syntactic categories. The comment could also suggest that the proposal is gibberish to someone who has never heard of Racket, and I would agree. If and when a Rhombus language built on these concepts exists, then it will be possible to explain everything in those terms. Meanwhile, this proposal bootstraps by using Racket for general concepts and to make the API concrete.

with the usual precedence. Unlike the other operators, the `.`
operator's right-hand side is not an expression; it must always be an
identifier.
meant to know about `::` specifically; the `::` meant to be a binding
Suggested fix: "the `::` is meant to be a binding"

remains between shrubbery notation and Racket's macro expander,
because shrubbery notation is intended to be used with less grouping
than is normally present in S-expressions.
syntax-object form, so it can include scopes to determine a mappin for
Suggested fix: "mapping"

Rhombus expander will dispatch on operator binding only during the
The relevant syntactic category for a shrubbery is determined by its
surrounding forms, and not inherent to the shrubbery. For example,
`cons(x, y)` might mean one thing as an expression and aother as a
Suggested fix: "another"

surrounding forms, and not inherent to the shrubbery. For example,
`cons(x, y)` might mean one thing as an expression and aother as a
binding. Exactly where the contexts reside in a module depends on a
specific Rhambus language that is built on the Rhombus expander, so
Suggested fix: "Rhombus"

binding. Exactly where the contexts reside in a module depends on a
specific Rhambus language that is built on the Rhombus expander, so
it's difficult to say more here. Meanwhile, a full Rhambus language
may have more syntactic categories than the oes directly supported by
Suggested fix: "ones"

@jeapostrophe
Collaborator

I've made some copy-editing comments inline.

I like your additions.

"The four categories for the Rhombus expander are merely the ones that are directly supported by the expander and its API." --- You say this elsewhere, and below, but I think it is worth talking about, in some way, the idea that Rhombus will be "syntactic category heavy" while Racket is "syntactic category light". What I mean by that is that the mores of Racket macros are not to make new categories, in part because the language & standard library doesn't, and what we're trying to do in Rhombus is (a) demonstrate how that is useful and (b) make it easy to do. I think that your list of example new categories might be fruitfully expanded based on your imagination and common examples: database query contexts, Web route contexts, import specifications, export specifications, and so on.

I take your "If someone has never heard of Racket, how would we explain these things?" comment as being primarily about how to justify the four syntactic categories.

I think that I mean that the proposal doesn't try to explain what a "declaration" vs "definition" vs "expression" is. I think that a casual observer of the LISP world would say that there are only two categories---expression and binding---and then if you pressed them, they'd probably admit that definitions are a thing, but my guess is that no one would come up with "declaration". I don't think that I could give a convincing explanation of what a "declaration" in this context or in the Racket context is.

I think that writing that explanation would lead to something like "Of course it's obvious all languages have these four things: the top of a compilation unit, a definition context, an expression, and a binding position. That's why those four are built-in to this discussion of Rhombus expansion, because they'll always be there. We're not describing the ceiling of syntactic categories... we're describing the floor, and we're demonstrating how a Rhombus-based language designer should think about their job... just like a Racket-based language designer thinks in a different way, such as by using conventions (like a leading define-) to indicate macros that produce definitions, when they design. The Rhombus-way is to use NEW categories with unique interfaces and bindings that behave differently in different contexts."

I'm not sure I understand your type-declaration example. I would expect a typed language to have an additional syntactic category for types, and the rationale now notes that possibility. I would hope that the new category is supported through a new kind of compile-time value, and not a compile-time function that an expander calls to determine the category where it's being used.

My point with that example is that it binds a in the forall, so it is a binding position. But I think that it makes sense for a typed language to have a type-binding position as a new category that doesn't necessarily correspond to a Rhombus value-binding position.

Include a more substantial prototype language, which helps for writing
examples. But at the same time, the expansion engine that's the
subject of the proposal is smaller and cleanly separated (in the
"enforest" directory/collection) from its use in the prototype.
@mflatt
Member Author

mflatt commented Jul 30, 2021

New draft pushed.

Experimenting with a Rhombus prototype helped clarify which pieces belong in this proposal and which details are "a language built with the Rhombus expander". The resulting proposal is more abstract, in the sense that it makes fewer assumptions. But it's more concrete in that the implementation part that belongs to this proposal is cleanly separated out, and it's explained with a lot more examples from the #lang rhombus experiment prototype.

I'll write up more about the prototype soon, maybe as a new PR.

@jeapostrophe
Collaborator

"The invocation of a transformer for an implicit operator does not include a token for the implicit operator, unlike other transformer invocations." --- Am I correct that this is different than Racket? Why the change? I feel like it can be nice for #%app to show up so I can use the same implementation for it and really-cool-app.

I notice that you seem to be going towards _ style because the shish-kebab style doesn't work anymore. People are going to bikeshed and argue forever about that versus camel case. You could make _ an operator and cut off the disagreement from the very beginning :)

Typo: "repersenting", "shribbery"

@mflatt
Member Author

mflatt commented Jul 30, 2021

@jeapostrophe You're right that Racket passes along a synthesized identifier to implicit implementations, and that has worked fine, so it's probably better to keep that behavior here. Changed.

Also, sync the implementation with the current prototype.
@samth
Sponsor Member

samth commented Aug 28, 2021

One feature of several other languages that has never worked that well in Racket is a sort of combining of multiple top-level forms together. Haskell is a good example of this:

not :: Boolean -> Boolean
not True = False
not False  = True

Here we have three conceptually-separate forms but they all get grouped into the same definition.

Another example, featuring a somewhat more regular syntax, is decorators in Python or annotations in Rust:

@foo
def f(x): return x
#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

Typed Racket implements something like this for type annotations:

(: f (-> Integer Integer))
(define (f x) x)

but that works via mutation and coordination using #%module-begin.

Is it possible to use Rhombus macros to combine forms this way? It feels like the following should definitely be possible:

ann f : Integer -> Integer
def f(x): x

Where the ann macro gets the remaining things to be enforested as unparsed shrubberies. But right now, if you define ann as an expr.macro then you don't get the def ... as part of the tail to be enforested. (I also tried definition_macro and declaration_macro but they didn't seem to work at all.) This makes sense when you look at the shrubbery that syntax produces, but it seems like passing the whole rest of the block would allow for something like this.

In the current implementation, at least as described in this document, that seems like it would involve passing all the remaining forms to definition and declaration macros in the rhombus-top trampoline, rather than just the immediate form that they're a part of. That would probably be somewhat harder to write in the default case, but the default behavior could be to just return them unparsed, similar to the expression-macro protocol.

@mflatt
Member Author

mflatt commented Aug 28, 2021

You're right that this sort of grouping is not specifically handled and would need some cooperation from an enclosing form (such as the module top-level and block forms). There's a precedent for cooperation from enclosing definition forms in #163's support for let as a kind of define*. You're also right that shrubbery notation works against the idea of having an operator that absorbs following groups, since one of the goals of shrubbery notation was to constrain the reach of a macro to its group.

A possibility for some things, which fits more directly into the expansion framework here, is to use different binding spaces. For example, with something like

ann not :: Boolean -> Boolean
def
 | not(True): False
 | not(False): True

then ann not could bind not in a type space while def binds not in the default space. Macros could go between spaces by adding or removing the space's scope from an identifier. (This layer of spaces and scope manipulation is not yet exposed in #163.)
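The space idea can be sketched abstractly (a hypothetical Python toy model of bindings keyed by name and space, not the actual scope-based mechanism in Racket):

```python
# Toy model of binding "spaces": the same name can map to different
# meanings in different spaces, and a lookup falls back to the default
# space when there is no space-specific binding. In Racket, spaces are
# realized by adding or removing a scope on identifiers; this dict-based
# model is only an illustration of the resolution behavior.

bindings = {}

def bind(name, space, meaning):
    bindings[(name, space)] = meaning

def lookup(name, space):
    # Try the space-specific binding first, then the default space,
    # mirroring how a space's scope takes precedence when present.
    return bindings.get((name, space), bindings.get((name, "default")))

# `ann not :: Boolean -> Boolean` would bind `not` in the type space...
bind("not", "type", "Boolean -> Boolean")
# ...while `def | not(True): False | ...` binds it in the default space.
bind("not", "default", "<the procedure not>")

print(lookup("not", "type"))     # the type annotation
print(lookup("not", "default"))  # the run-time definition
```

A macro that "goes between spaces" would, in this model, simply change which space key it uses for lookup; in the real system, that corresponds to adding or removing the space's scope from an identifier.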

The binding-space approach doesn't help with the things that look like annotations, though, where the way encouraged by shrubbery notation is to have a new form with the definition in a block:

foo:
  def f(x): return x

If avoiding this kind of nesting is an important enough goal, then it might suggest a different surface-syntax approach instead of the shrubbery approach.

@sorawee
Contributor

sorawee commented Aug 28, 2021

Would it be possible to eliminate the apparent cooperation from an enclosing form by using an interposition-point form and/or a reader macro? (This is what #85 is proposing.)

@rocketnia

rocketnia commented Aug 29, 2021

I think giving compound structure to declarations at the top level of a file has a lot in common with parsing. Instead of a sequence of tokens, it's a sequence of declarations. So, for a language with an enforestation pass to process custom infix operations, I wouldn't be at all surprised to see a similar kind of enforestation pass for declarations.

I think there are some different design pressures for it, though. Function declarations, which would probably be the single most numerous kind of declaration, are capable of mutual recursion and can be freely rearranged with respect to each other. This makes it almost odd when some declarations have a more ordered relationship.

  • Delimiters
    • Declarations that geometrically cordon off some region of other declarations to dictate their meaning, like define* (see #46, "Make an RFC for supporting define* in internal-definition context").
    • Section headings. I don't know languages that make much explicit use of these, but some imperative namespacing systems effectively have section heading commands per-module (ns in Clojure, context in newLISP), and Inform 7 uses them to allow whole chapters of a program to be commented out. They seem Markdown-like, and I think they're a potentially compelling approach to cordoning off parts of a set of declarations so define* doesn't clobber the entire set.
  • Accidental dependency orders
    • Expansion-time dependencies, which can impose a partial order between macro definitions and call sites, and which have to be somewhat consistent with syntactic order in languages like #lang racket.
    • Stratified load-time dependencies, which impose a partial order between initializations and use sites, and which have to be somewhat consistent with syntactic order in a language like #lang racket.
  • Ordered whole-program composition
    • Imperative load-time statements.
    • Cascading rules, like CSS, pattern match clauses, or multimethods in some systems, where the order they're attempted in corresponds somehow with their syntactic order. (Pattern-matching usually tries from first to last. CSS tries from last to first. Multimethod systems can go either way depending on whether we think of them as a fancy pattern-matching system or a fancy default-overriding system.)
  • Definitions by parts
    • Closed-world definitions-by-parts that don't quite rise to being openly extensible, and where being able to spot exhaustiveness is important, such that it makes sense to enforce that they all appear within one small neighborhood. A few things may be allowed in between anyway.
  • General extensions to the syntax of a declaration
    • Annotations, like the Rust annotations and Python decorators @samth mentioned.
    • Simple comments that annotate things.
    • Documentation comments.
    • Annotations of tags and names that other parts of a codebase can hook into, e.g. for hyperlinks between comments or applying certain linting policies.

I think these can basically break down into phases: Delimiters and annotations cooperate with a delimiter-macroexpander to produce a structured document of sections made up of annotated section headings and annotated declarations. (Along the way, the delimiter-macroexpander can discover definitions of macros that extend its behavior.)

Now that the delimiter-macroexpansion is out of the way, we section-macroexpand according to policies present on the annotated section headings. A section-macro in turn typically expands a section body by first parsing out contiguous "definitions by parts" groupings, then running several topic-specific macroexpanders that take turns processing the "ordered whole-program composition" declarations they're interested in.

After that, each section has been broken down into an orderless collection of ordered collections of declarations (some individual declarations; some definitions by parts; some ordered whole-program composition systems that are independent of each other). I guess I'll call them subfeeds. Each one can then be subfeed-macroexpanded in its own way.

Breaking things down into hierarchical sections and independent subfeeds of declarations makes this syntax more concurrent; in fact, some of the passes can logically begin processing before the others have finished. I find this valuable because it should help with reporting multiple errors in independent parts of the file and should help with caching of compilation results.

This describes just the macroexpansion of the top level of a file, but I think most nested blocks could be macroexpanded in roughly the same way (with variations that have to do with specific applications, like some blocks being Scribble documentation and such). Having the outside edge of a file be macroexpanded before the inner parts would be great for letting IDE tooling help out even when the inner parts have errors in them.

That's just one idea, anyway. :) I've been sitting on this combination of outside-in parsing and Markdown-like section headers for a while, and I thought about making it a proposal at some point, but I'm not sure I have the time.

@mflatt
Member Author

mflatt commented Aug 29, 2021

@sorawee If I understand what you mean, #%block in the #163 prototype can serve that role. Currently, some primitive forms expand to rhombus-block instead of going through #%block, but almost certainly they should go through #%block, and it would be straightforward in the cases that I checked.

@rocketnia That's an interesting line of thought. While I'd be wary of building in too much complexity, it does seem like there could be improved cooperation from the default #%module-begin and #%block to make those kinds of things compose better — something more than the support that the core #%module-begin provides through things like syntax-local-lift-module-end-declaration (which is used to implement module+).

@samth
Sponsor Member

samth commented Aug 30, 2021

One question I had while reading the description of the enforestation API/implementation was whether the building blocks have a Rhombus-level API, or just a Racket-level API. That is, the API described here is "three Racket structs and one macro". Is the intent that implementation on top of Racket is an exposed feature of the macro system (the same way that Typed Racket's design presumes that Racket is a part of what you know about) or is the eventual goal to have a pure-Rhombus explanation of the macro system (the way that Chez Scheme is basically just an implementation detail of Racket itself)?

@mflatt
Member Author

mflatt commented Aug 30, 2021

The intent is to have a Rhombus-level API, and everything would be presented and explained in those terms (so, like Chez Scheme relative to Racket).

@mflatt
Member Author

mflatt commented Sep 4, 2021

I've updated the "shrubbery-rhombus-0" package with a proof-of-concept defn.sequence_macro form. The module top-level form and the block (definition-context) form recognize a definition-sequence macro binding before trying a definition or expression. The definition-sequence macro receives all the groups in the rest of the block, and it returns the unused portion as a second result — analogous to expression macros, as @rocketnia suggests.

Here's a dumb example, which is a reverse_defns form that swaps the order of the next two groups:

#lang rhombus

defn.sequence_macro ?{reverse_defns; ¿defn1 ...; ¿defn2 ...; ¿tail; ...}:
  values(?{ ¿defn2 ...; ¿defn1 ... }, ?{ ¿tail; ...})

reverse_defns
def x: y+1
def y: 10

x
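The protocol the example relies on can be sketched abstractly (a Python toy model with an invented string-based group representation; the real trampoline works on shrubbery syntax objects):

```python
# Toy model of the definition-sequence macro protocol: a transformer
# receives every remaining group in the block and returns two values,
# its expansion and the unused tail, which the expander then continues
# to process. Mirrors the reverse_defns example above.

def reverse_defns(groups):
    """Swap the next two groups; hand the rest back to the expander."""
    defn1, defn2, *tail = groups
    return [defn2, defn1], tail

def expand_block(groups):
    """Sketch of the top-level trampoline: when the first group names a
    sequence macro, give it the whole rest of the block."""
    out = []
    while groups:
        head, *rest = groups
        if head == "reverse_defns":          # a sequence-macro binding
            expansion, rest = reverse_defns(rest)
            out.extend(expansion)
        else:                                 # ordinary defn/expr group
            out.append(head)
        groups = rest
    return out

block = ["reverse_defns", "def x: y+1", "def y: 10", "x"]
print(expand_block(block))   # ['def y: 10', 'def x: y+1', 'x']
```

The key point of the protocol is the second return value: by returning an unused tail, a sequence macro can consume as many or as few of the following groups as it wants, analogous to how expression macros return an unparsed tail.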

@rocketnia

Looks to me like that's exactly what's needed for define*. :)

I think a lot of the rest of what I described above could be implemented as a library in terms of that. Section heading syntaxes could operate by being sequence macros that took more control over processing the rest of the file. They'd scan for other delimiter-macro definitions, delimiters, and annotations, then proceed to expand the hierarchical result according to a section-macroexpander implementation (and so on) defined within the library. They might leave an "unused portion" after some kind of section-ending delimiter, but I think once a syntax has its beginning and ending both marked, it might as well use brackets (or other block structure).

Speaking of which, I think reverse_defns is one of the places where it would make a little more sense to use actual block structure:

reverse_defns:
  def x: y+1
  def y: 10

x

But it serves its purpose as a toy example, at least. :) A more realistic simple example of leaving behind an unused portion might be an annotation that only consumes a single declaration, perhaps with some other arguments before that, like a documentation block.

@rocketnia

On the other hand, an annotation probably wouldn't make a great example either. I think there's quite a bit of subtlety to take into account when trying to consume a specific number of declarations. The very fact that a declaration could be annotated means that it could be made up of more than one part at the s-expression/syntax object level, and a well-designed annotation macro should check for that in its input as well.

This is why I have the parsing of delimiters and annotations intermixed with each other in the first part of the macroexpansion strategy I described (...and now I wonder if definitions by parts should join them).

I think if a sequence macro doesn't make a particular effort to parse out annotations and delimiters and such from the declarations that follow it, it should probably treat those declarations as a single, indivisible block. The define* macro would do that, and so would most of the other things I use Parendown for (early exits like @jackfirth's guarded-block, local-to-the-current-block parameterize commands, local-to-the-current-block error handlers, and monadic do-style CPS assistance in general).

Most of the CPS assistance only makes sense in a local block, where treating the rest of the block as a run time continuation can make sense.

At the module level, in most cases, the indivisible block would be spliced into the module to facilitate mutual recursion, using begin or splicing-let or something. In the remaining cases, there's probably a framework involved, like a web server or a game engine, where a file represents some entity that really only has one framework entrypoint, and programmers take to using a sequence macro to avoid indenting it.

To gather my thoughts, I think the most compelling examples of sequence macros break down as follows:

  1. Sequence macros that take over the processing of the rest of the block, so that certain language features can be implemented in libraries.
  2. Sequence macros for use in imperative blocks to help with CPS (even simpler cases of CPS like variable shadowing, early exits, error handling, etc.).
  3. Sequence macros for use in mutually recursive blocks, which splice the remaining declarations into the surrounding definition context using something like splicing-let.
  4. Sequence macros for module-level use to let the programmer write the module's main entrypoint without indentation.

...The more I think about 1, the less I like it. I think this would be better handled at the #%module-begin or #%block level, so that users don't have to wonder whether every macro does its takeover in the same way. In other words, I don't think sequence macros alone would let me contain the complexity of my "delimiter-macroexpansion" approach in a library; it would be more of a language.

The define* sequence macro would probably be an example of 2 or 3, depending on whether or not people expect it to break up mutual recursion.

Number 4 is probably the foremost reason to write a #lang, so while sequence macros could come in handy for it, it's probably not an ecosystem gap that sequence macros are particularly needed for.

@michaelballantyne
Contributor

I haven't read every discussion point, so perhaps already answered:

Have you considered an alternative design in which operator declarations specify the contexts of the subexpressions? It seems like it would be nice if the more complex 'macro protocol were not necessary.

@mflatt
Member Author

mflatt commented Sep 8, 2021

@michaelballantyne I think I don't understand the question. While I can see how specifying contexts on arguments would work — like using syntax-parse with syntax classes that trigger local expansion — I don't see how that would avoid the 'macro protocol. The 'macro protocol seems needed for all sorts of things, including . (parses the right-hand side as an identifier and expands in a way that depends on that identifier and the left-hand side) or ? (parses its right-hand side as a template, potentially with ¿ escapes) in #163. More generally, the 'macro protocol seems like the main point to me, and the non-'macro protocol is just a shorthand.

The `define-enforest` macro is now parameterized over the implicit
names that it uses.
@mflatt
Member Author

mflatt commented Sep 10, 2021

Based on a suggestion from @willghatch, define-enforest is now parameterized over the selection of names of implicit prefix and infix operators. This makes it less tied to shrubbery notation, although the implementation still relies on tagging every form at the S-expression level, and it relies specifically on op wrapping operators. It could be generalized even further, of course.
