Macros: Add a concept of "macro libraries" #1831

61 changes: 61 additions & 0 deletions working/macros/feature-specification.md

@@ -559,6 +559,67 @@ though it violates the normal rules.
We could instead only allow adding macros from _later_ phases, but that would
restrict the functionality in unnecessary ways.

## Macro Libraries and Tree Shaking

Macros, and any dependencies of macros, should be guaranteed to be tree shaken
in their entirety from the final program. At runtime, the macro definition
libraries should not be available, nor any stubs of them. The imports should
be entirely removed.

In order to support this, we introduce a new type of library - a "macro"

**Member:**

Do we have evidence that this feature is actually needed? Do we know that we need to make users explicitly decide which libraries should be macro libraries and which shouldn't (and also lose the ability to test their macros!) just to ensure that the compiler does what it should be able to do automatically?

This feels like a lot of user effort for almost no value.

If we do want to go in this direction, then I think we shouldn't have macro libraries, we should have macro imports. Whether some code is compiled into the final program is a property of how it's used, not how it's declared. We could say that you can mark any import as (maybe) static:

```dart
static import 'package:macro_stuff/macro_stuff.dart';
```

The behavior then is:

- It is a compile-time error to refer to any identifier imported from a static import outside of a macro application expression.

This would let you then test your macros by having the test code import the macro normally.
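
A rough sketch of what that could look like in user code (the `static import` syntax is just the proposal above, and `DataClass` is a hypothetical macro from `package:macro_stuff`):

```dart
// app.dart -- hypothetical user code under the proposed `static import`.
// Identifiers from a static import may only appear inside macro application
// expressions, so nothing from the macro package can flow into the program.
static import 'package:macro_stuff/macro_stuff.dart';

@DataClass() // OK: this is a macro application expression.
class User {
  final String name;
  User(this.name);
}

// final generator = DataClass(); // Error: used outside a macro application.
```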

**Contributor Author (@jakemac53, Sep 9, 2021):**

That is another approach that I considered, for sure. The problem with it is that people are going to create libraries that contain both macros and other normal code, and some consumers of those libraries won't use the macro at all. Sooner or later some pattern in the program will defeat tree shaking of the macro, and we end up with portions of the macro libraries, and the macros themselves, shipped with real apps.

If we believe fully in tree shaking, then we don't have to do anything at all (including removing the import; it will all be tree shaken anyway!).

The mechanism I proposed here should be 100% foolproof, and won't have the same pitfalls. It also doesn't require most users to even know these types of libraries exist - only the authors of the libraries have to know.

**Member:**

I'm worried about intermingling macro code and run-time code too, so much so that I don't like importing the code of the macros at all.

Consider instead:

- We have `dart:macros`, which defines an abstract API and some base annotation classes.
- An annotation can extend those and specify a macro implementation by URL, along with data to pass to that code.
- The macro is a script which gets an actual implementation of the macro API as the second argument to `main`. We can define any API we want there, including allowing the script to stay alive and handle more macro expansion requests, or just shut down when it's done.
- The macros are then defined in macro libraries, which are perfectly normal libraries with a `main` method taking a `MacroContext` as its second argument. You can run them with mock macro implementations for testing.

Then my `package:data_classes/data_classes.dart` would expose a public annotation `DataClass(...data...)` which extends `MacroAnnotation("package:data_classes/generator.dart", ...data...)`. The macro processor will see that, spawn that script (along with an implementation of the macro API, maybe fully fledged or maybe a shell which communicates with the real processor using ports), and give it a `MacroContext` containing the data and anything else it needs. It generates code and inspects existing code by interacting with the context.

But no library is a special kind of library. It's all about which APIs a library has access to, and who will invoke it.
And it keeps the macro implementation at arm's length from the code which uses it.
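
A sketch of how those pieces could fit together, purely to show the shape (the `dart:macros` import, `MacroAnnotation`, and `MacroContext` are the hypothetical pieces described above, not an existing API):

```dart
// package:data_classes/data_classes.dart
// The public annotation: plain const data plus a pointer at the script that
// implements the macro. This is all that user code ever references.
import 'dart:macros'; // hypothetical core library

class DataClass extends MacroAnnotation {
  const DataClass({this.immutable = true})
      : super('package:data_classes/generator.dart');

  final bool immutable;
}

// package:data_classes/generator.dart
// A perfectly normal Dart script. The macro processor spawns it and passes an
// implementation of the macro API to `main` as the second argument; a test
// could pass a mock `MacroContext` instead.
import 'dart:macros'; // hypothetical core library

void main(List<String> args, MacroContext context) {
  // Inspect existing declarations and generate code via `context`.
}
```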

**Contributor Author:**

It feels to me like that approach starts tying our hands to a specific implementation strategy, and it also opens up a fair bit of extra API without much benefit over this proposal. It does open up a strategy for testing, but does it solve much else?

The `MacroAnnotation` class and other APIs exposed through `dart:macros` also still leak into the program in that case.

**Member:**

Yes...let's PLEASE be careful here.

**Contributor:**

I don't see how importing a library with macro builders and not using them is different from importing some other library and not using some of its parts. In both cases the result is that your compiled application is bigger than it could be. In both cases the solution is not to avoid these (which?) patterns that don't allow tree shaking to do its work.

**Contributor Author (@jakemac53, Sep 13, 2021):**

The difference is that we can make a clear distinction here about code that we know is only useful at compile time (excluding the test use case, though). That isn't true of other code, so it is different.

Also, I believe that the set of dependencies used by macro code and the set used by runtime code will be very different. Consider the `dev_dependencies` typically added to an app that uses `build_runner` today; that is quite often many more packages than are actually used at runtime (especially when you consider the transitive set).

It is true we don't have to take this approach. But I do think it would be a mistake not to enforce that these build-time dependencies get removed at runtime.

**Contributor Author:**

If we don't want to make a separate type of library to enforce this, then I would probably want to go with something like the proposal @lrhn suggested.

One advantage of that is it could simplify some of the questions around arguments to macros. We could treat them more as just serializable data, and that has its appeal for sure.

**Commenter:**

I think the question here then is (correct me if I'm wrong): can a macro library be a `dev_dependency`, or does it have to be a regular dependency because of the annotations? That all depends on how you specify whether macro annotations end up in the final code. Personally, I look at it from the perspective that annotations are for static analysis / compile-time usage only and should all be removed from the final code. Because of `dart:mirrors` I understand that isn't the case currently, but if macros are supposed to replace `dart:mirrors`, I would assume the new language version could also remove annotation reflection, so that all annotations and annotation libraries can be `dev_dependencies` and are removed from the final generated code.

**Contributor Author:**

Macros have to be regular dependencies - unless we decide packages should ship with their macros already run, but I don't think we want to do that.

But we do still want all the macro dependencies to be eliminated from the final application.

library. You can mark your library as a macro library through a `macro`
modifier on the library directive: `macro library my_macro;`.

- Macro libraries can import and freely use code from other macro libraries
or normal libraries as they choose.
- If a normal library imports a macro library, it can only use symbols from
that library within macro application code.
- Once macros are expanded, all macro applications and imports of macro
libraries are removed from the program.
- Regular libraries _are_ allowed to export macro libraries (for convenience),
and that _does not_ make them a macro library. The _export_ is removed from
the final program, and its symbols are still only visible within macro
applications when the original library is imported.
- The library containing the Macro interfaces and APIs is itself a macro
library. This makes it only useful to import from other macro libraries, and
ensures its APIs never leak unexpectedly into runtime apps.

In this way, macros (and their imports) have no effect on the final program
beyond the code that they generate. It also draws a clear line between
macro libraries and normal libraries, without most users having to understand
the actual distinction between them (they appear similar from the outside).
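
For example (the `DataClassMacro` class and the macro interfaces it would implement are placeholders):

```dart
// data_class_macro.dart -- a macro library; removed entirely after expansion.
macro library data_class_macro;

// Macro libraries may freely import and use normal libraries.
import 'dart:convert';

const dataClass = DataClassMacro();

class DataClassMacro /* implements the macro-building interfaces */ {
  const DataClassMacro();
}

// user.dart -- a normal library.
import 'data_class_macro.dart'; // this import is removed after macro expansion

@dataClass // OK: macro library symbols may appear in macro applications.
class User {
  final String name;
  User(this.name);
}

// final m = DataClassMacro(); // Error: referenced outside a macro application.
```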

### Testing Macros

While it may be desirable to write functional unit tests of macros, this
approach does not generally allow for that, since macro code is not available
at runtime.

Instead, users are expected to write integration-style tests, where they use
their macro directly in their test and then test the behavior of the resulting
generated code.
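
For instance, given a hypothetical `@dataClass` macro that generates a `copyWith` method, an integration test might look like:

```dart
// test/data_class_test.dart
// The macro is applied directly in the test, and the test asserts on the
// behavior of the generated `copyWith`, not on the macro implementation.
import 'package:data_classes/data_classes.dart';
import 'package:test/test.dart';

@dataClass
class User {
  final String name;
  User(this.name);
}

void main() {
  test('generates a working copyWith', () {
    final updated = User('Ada').copyWith(name: 'Grace');
    expect(updated.name, equals('Grace'));
  });
}
```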

TODO: Can we come up with a better testing pattern? Possibly something where the
tests are actually run at compile time, as a macro?

**Member:**

I'm not sure what you mean by compile-time. Do you mean the time at which the macro is being loaded into the VM, which presumably happens when loading code that uses the macro?

Would this include the analyzer's execution of macros?

**Contributor Author:**

Essentially I was thinking you could maybe run a "test" where the entire test is actually just a macro (so the macro APIs are available). This would mean the analyzer would also run these just like any other macros. And so your "tests" would actually run as a part of the analyzer, which might be kind of cool actually.

But I would not be asking the analyzer to implement a testing framework or anything like that, just execute macros normally.


TODO: Can we expose some sort of golden test functionality to match the
expected output?

**Member:**

Does "expected output" mean the content of the file that would be generated by the analyzer, VM, or any other tool that requires a textual representation of the generated code?

Does the expected output include the imports of macro libraries and macro applications?

**Contributor Author:**

In general I made these TODOs because I don't have the answers to these questions. Just future avenues to explore.

I don't have any specific ideas on how this could/should work right now; I just know people sometimes like to use golden files for testing codegen output today.
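
For reference, the golden-file pattern used with today's code generators looks roughly like this (all file names here are made up):

```dart
// test/golden_test.dart
// Compare generated output against a checked-in golden file, and fail with a
// hint to regenerate or update the golden when they drift apart.
import 'dart:io';

import 'package:test/test.dart';

void main() {
  test('generated code matches the golden file', () {
    final generated = File('lib/src/user.g.dart').readAsStringSync();
    final golden = File('test/goldens/user.g.dart.golden').readAsStringSync();
    expect(generated, equals(golden),
        reason: 'Regenerate the code or update the golden if this is intended.');
  });
}
```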


### Modular compilation

During modular compilation, macro libraries should be compiled as their own
module. These modules should never be provided to compiler backends, and are
only used during kernel compilation.

When creating a merged dill file, you should simply omit the macro library
modules entirely from the concatenated dill. The same goes for dill manifest
files: the macro library modules should be omitted from the manifest.

### Macros and `dart:mirrors`

Since macros are removed entirely after compilation, we do not allow reflecting
on them with `dart:mirrors`.

## Scoping

### Resolved identifiers