
[macros] Consider not overloading metadata syntax with macro metadata. #3604

Closed
modulovalue opened this issue Feb 1, 2024 · 33 comments
Labels
static-metaprogramming Issues related to static metaprogramming

Comments

@modulovalue

Currently, the macro proposal reuses metadata syntax (e.g., @foo) for marking declarations as being eligible for macro expansion.

This invalidates an important invariant that users of Dart are used to: metadata can only affect declarations in ways that any other declaration outside of a library can. That is, if we see some metadata on a declaration, we can assume that there's nothing "funky" going on. Third-party tooling might generate some code, but we can assume there is no surprising change to the behavior of the annotated declaration.

I think that this is valuable, so I'd like to propose that the macro proposal introduce a new kind of metadata: "macro metadata" with custom syntax (e.g., @!foo, where a ! must follow @ for metadata to become macro metadata). Only declarations marked with macro metadata would be eligible for macro expansion.

This allows tooling to skip macro expansion processing on declarations that have no macro metadata, and users can immediately see which declarations can be expected to be affected by macro expansion.
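As a sketch of how the proposed distinction might read (the @! syntax is only a suggestion from this issue, and jsonSerializable is a hypothetical macro):

```dart
// Plain metadata: inert from the language's point of view; tooling may read
// it, but it cannot change what the declaration means.
@deprecated
class OldApi {}

// Hypothetical macro metadata: the leading `!` signals that this declaration
// is eligible for macro expansion and may gain generated members.
@!jsonSerializable
class User {
  final String name;
  User(this.name);
}
```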

@rrousselGit

Agreed.

There's also the added fact that some macro parameters would be allowed that are not possible with current annotations.
From what I remember, we'd be able to do:

@Macro(
  (arg) => 'Hello $arg',
)

but that's not legal with annotations.


I'd personally suggest using #. It's an underused symbol (pun intended)

#data
class Foo {}

We could also tweak the formatting for macros specifically.

I think folks would prefer

#data class Foo {}

@jakemac53
Contributor

This has certainly come up in the past. I am not necessarily opposed to using a different syntax, but I do think the motivation needs to be strong, and there are advantages to re-using the metadata syntax:

  • It makes migration from existing generators easier and more familiar
  • I think users would already understand intuitively that code generation is happening with the existing syntax, it matches existing code generation. It is also familiar for users coming from Java for instance
  • We don't have to claim any additional syntax space
  • It allows the potential for packages on older language versions to use macros (but not define them). This is analogous to a package that does not have access to records being able to use record types that come from dependencies. Only the macro package has to be opted into macros, and anybody can use that macro, regardless of their language version.
    • Note that in practice, this probably has less value than it seems on the surface. The main reason for keeping a language version low is to support users on old SDKs, but your macro dependency will not be available to them anyways, so you won't truly be able to support those users (they won't be able to get a version solve with your new version).

@rrousselGit

It allows the potential for packages on older language versions to use macros (but not define them). This is analogous to a package that does not have access to records, being able to use records types that come from dependencies. Only the macro package has to be opted into macros, and anybody can use that macro, regardless of their language version.

Does that really work in practice?

When considering:

@Macro(
  (arg) => 'Hello $arg',
)

older language versions would fail to compile.
Sure, not all macros will use this syntax, but some will.

@jakemac53
Contributor

Does that really work in practice?

For this particular feature, since it does require new parsing features, maybe not. Although technically it probably could work, it would be a bit weird. It would not be possible to use a macro unless your SDK supported macros, so it would for sure have the ability to parse that code. Whether or not we language-version the support is a different question. But for instance, the record analogy I made is actually quite similar to this: you can use records without being opted in to the feature, but you still can't explicitly write out a record type or literal.

@lrhn
Member

lrhn commented Feb 1, 2024

If we allow function literals inside macro annotations, we should just allow them in any constant expression. There's a request for that, constant function literals, and it would be useful in other places.
In general:

  • allow const to prefix any expression, putting it into a constant context, and
  • allow function literals in a constant context, which can only refer to static names and their own parameters.

We shouldn't have special rules for macro annotations.
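A sketch of what the general rule would permit (this is not legal Dart today; it illustrates the shape of the proposal, not existing syntax):

```dart
// A function literal in a constant context, referring only to static names
// and its own parameters — a "constant function literal".
const greeting = (String arg) => 'Hello $arg'; // hypothetical: not legal today

@Macro(
  // Under the general rule, this macro argument is just an ordinary
  // constant expression; no macro-specific special case is needed.
  (arg) => 'Hello $arg',
)
class Example {}
```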

@jakemac53
Contributor

These aren't actually const functions. What happens when you do this is an implicit conversion to a Code object, resolved within the same scope as the annotation itself. That code object can then be used by the macro as a part of their own code that they generate.

This is somewhat problematic already though, and we probably need to revisit it. Ultimately, we need to be able to emit valid Dart code, and the scoping rules as defined would not only be potentially challenging to implement, they also may not even always be possible to implement, using regular Dart code (because of shadowing).

Also, there has been some general discomfort with the idea of any form of implicit conversion like this. Macros do this quite a lot for their constructor parameters, though; it isn't isolated to Code parameters.
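As a rough sketch of the implicit conversion being described (all names here are illustrative, not the actual macros API):

```dart
// Hypothetical macro: a constructor parameter typed as `Code` receives the
// argument expression unevaluated, resolved in the application site's scope.
macro class Greeter implements ClassDeclarationsMacro {
  // `(arg) => 'Hello $arg'` written at the application site would arrive
  // here as a Code object rather than a function value.
  final Code transform;
  const Greeter(this.transform);
  // ... the macro can then splice `transform` into the code it generates ...
}
```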

@lrhn
Member

lrhn commented Feb 2, 2024

If Code object literals are needed, then that's another thing we can add generally. Not sure precisely which use cases to cover, but some kind of quote/quasi-quote-like syntax for delimited Dart code, evaluating to a Code object, is a possibility.

@mraleph
Member

mraleph commented Feb 2, 2024

Not sure precisely which use cases to cover, but some kind of quote/quasi-quote-like syntax for delimited Dart code, evaluating to a Code object, is a possibility.

Yeah, using functions for this is strange. If we are considering adding that, we should instead just add quasi-quoting, because it looks quite arbitrary that you can use QQ in a macro application, but can't use QQ to actually implement macros.

But that's the topic for #1989

@munificent
Member

munificent commented Feb 5, 2024

Another reason in favor of using metadata annotations for macro applications is that macros will often need metadata annotations applied to declarations anyway, to configure how the macro behaves. For example, consider a serialization macro that allows the user to declare that some fields shouldn't be serialized:

@serialized
class SomeClass {
  final int x;
  final String y;

  @noSerialize
  final String z;
}

Using metadata annotation syntax for macro applications makes the syntax uniform between applying a macro and configuring a macro. In some cases, it may be an implementation detail which annotation is actually the macro and which is the configuration. For example, consider a macro that, I don't know, establishes some kind of connection between a Flutter widget class and its corresponding state:

@widget
class SomeWidget extends StatefulWidget {
  ...
}

@widgetState
class SomeWidgetState extends State<SomeWidget> {
  ...
}

From the user's perspective, they don't care if @widget is the macro and @widgetState the metadata or vice versa. The owner of that macro may even want to switch that at some point without breaking users.

Another way to think about it is that in a world where Dart has macros, it's likely that almost all uses of metadata annotations will be for use with macros. And at that point, it probably makes sense to retcon the syntax to mean "static metaprogramming stuff here".

@rrousselGit

rrousselGit commented Feb 5, 2024

I don't like that argument.
I personally really hope that in the future, we'll have statement macros. But annotations feel very weird there.

For example, I'd like to be able to do:

Widget build(context) {
  final snapshot = @streamBuilder(stream: stream);
  return Text(snapshot.data);
}

But using the annotation syntax for a macro here feels confusing.


Another way to think about it is that in a world where Dart has macros, it's likely that almost all uses of metadata annotations will be for use with macros

Even with macros, I expect most annotations to be @override and some variants.

I'm personally worried that newcomers would view @override and think "this does something funky I don't understand".
Some newcomers already think that about @override, but macros would make it a lot worse IMO.

@pattobrien

I'm personally worried that newcomers would view @override and think "this does something funky I don't understand".

IMO that's an issue that has more to do with @override needing to be built into the language, than anything else. (issue)

I'm not sure I agree with the general argument being made. In cases like @visibleForTesting, @mustCallSuper, @deprecated, etc - there may not be any code generation happening, but I think it's pretty clear that something "funky" is going on when they result in strikethrough and warnings appearing in the IDE, and it's generally a sign that communication is happening with the analyzer/compiler.

@lrhn lrhn added the static-metaprogramming Issues related to static metaprogramming label Mar 8, 2024
@lrhn
Member

lrhn commented Mar 8, 2024

One thing that does worry me about using the same syntax, is that macros must run before identifiers can be resolved.
Because of that, I'm more worried about the import not being distinct, than about the application.

An import "package:mymacro/mymacro.dart"; does not stand out syntactically compared to other imports like import "package:file/file.dart";.

We cannot conclusively say which names are exported by a library before we have run macros.
We can make heuristic guesses, which is what is enabled by not allowing macros to introduce macro class declarations. That means that you can read the mymacro.dart library files and search for macro class to see which macro classes it exposes, and its transitive export declarations too (you should be able to export a macro class).

But that means we must traverse all transitive dependencies of a library and look for macro class declarations, to see which macros might be available in the current library. (If we find a cycle back to this library, then any macro classes from that import cannot be used, but we don't want to prohibit them, because a macro implementation itself may depend on mutually referencing libraries.)

Then we can look for applications of what looks like macros. (It's only "looks like", we don't yet know if a macro will generate a member with the same name in this library, which would shadow the original annotation. We also don't know that the current library, or non-macro libraries it depends on, are valid yet, and we don't know what they export.)

I'd suggest instead marking the import:

@macro import "package:mymacro/mymacro.dart";

This import is special, it's (assumed, we don't know before we can resolve identifiers) annotated by a constant macro exported by dart:core. That triggers the macro preprocessor, which checks that that library is self-contained (no dependency back to the importing library), which means that it can actually process all the files depended on by mymacro.dart eagerly and figure out exactly which declarations it exposes.

Inside that we have macro class MyMacro ... (I'd make that @macro class instead, no need for new syntax), which we can recognize as a macro implementation.

Then the current library can check for annotations of the form @MyMacro(...). If it finds some (some restrictions on allowed arguments), it triggers compilation of mymacro.dart in a new (or maybe reused) isolate, and creation of a MyMacro(...) value with a Code object or list of Code objects as argument.
Then the current macro generation process begins, like today, which updates the original library by adding a library augmentation import (it probably should be allowed to add normal imports too) and removing the @MyMacro(...) annotation and the @macro import ....

Then the current library is complete, and we can start checking what it means. (We can make it a compile-time error if the @macro wouldn't have referenced the declaration exported by dart:core, or the @MyMacro wouldn't have resolved to the declaration from mymacro.dart, before removing them; but since the arguments to @MyMacro("banana") wouldn't necessarily be a valid constructor invocation, it should be removed before Dart compilation starts.)

It does mean that we are optimistically resolving @macro and @MyMacro to potential targets, before we know what names are actually imported by other imports, which means before we can resolve identifiers that are not declared locally.
That's unavoidable if we don't introduce new syntax.

We could do that:

@import "...."

@@MyMacro(args)

This syntax makes the import a macro import, fully processed before any other part of the file and unable to depend back on this file; and @@MyMacro(...) is a macro application, not a normal annotation, processed against only the names from @imports, before we know what other imports might introduce.
Or something like that.

@jakemac53
Contributor

cc @davidmorgan

This is related to recent discussions about build time vs run time deps etc as well.

If we mark macro imports specially, we could also potentially say that the things from those imports are only available at compile time. And also possibly add some extra rules to make the assumptions about these identifiers safe (such as saying you can't shadow declarations from macro imports).

And if we get reflected imports (which I think we do need, or at least an equivalent of those), then we could also specialize those and say they are added to the runtime dependencies of any library that imports the macro.

That is all a bit hand-wavy to be sure, but I think might be an interesting idea to flesh out?

@davidmorgan
Contributor

Thanks Lasse, thanks Jake.

I am starting to think we will need to pull considerable metadata related to macros out of the dart files and into pubspecs.

If Dart only lets you discover what needs doing during the compile then that severely limits the possible parallelization across processes/machines.

Pretty much what I would like is that you can turn the set of pubspecs (-->package configs, really) for your program into a set of build steps--command lines--to run that have all macro compiles and macro invocations already accounted for.

If we can't do that fully, we need to understand the difference and how expensive it could be for build performance.

@jakemac53
Contributor

If Dart only lets you discover what needs doing during the compile then that severely limits the possible parallelization across processes/machines.

I would say that the current spec is actually very highly parallelizable, without any configuration. You can discover macros via parsing only, which is a key part of this. Parsing itself is highly parallelizable, so macro discovery is as well. Parsing also gives you a dependency graph, etc, so you can order your macro compilation and application steps early still.

For me, what pubspec configuration would primarily provide is the ability to share compilation of macros across tools, which would have nice benefits, but I don't believe it is a necessity; we already duplicate all other compilation work across all tools today.

@lrhn
Member

lrhn commented Mar 12, 2024

I'd really prefer if you don't need to do anything special in the pubspec.yaml file.
That even includes having a new kind of dependency for macro-specific stuff.

The dev-dependencies are only needed to run things outside of lib/.
Files inside lib/ depend only on the primary dependencies, which means that a package depending on another package for imports only needs its primary dependencies.
It also needs those during compilation. Maybe only during compilation, but that's true for all source files.

What's special about macros may be that they are needed during compilation, but their code is not linked into the resulting program. And that's where we need to be clear about separation.

A macro import+application happens before other imports are considered, and are not included in the resulting program.
We should not assume that some declarations of such an import can be used at runtime, because that's going to be incredibly hard to separate. It's better to say that a macro import, like a deferred import, cannot introduce any types or constants, and that any access to a member imported only through a macro import, other than in a macro annotation, is a compile-time error.
A macro import should be considered a program which can be compiled separately, with the imported library as entry point. It's entirely separate from the importing library, and the import can be removed after the macro has been applied, along with the macro application annotations. The resulting program should still be valid then.

As Jake says, if we can detect by parsing that something is a macro import, we know that it needs to be treated as such.
(That's why I want the import to be marked, because then we don't need to actually load and parse and try to compile the library, in order to figure out whether it exports a macro class declaration. We just assume that to be true, compile the macro import library, and then let the macro applications be checked against what's actually exported. If it works, it works. No need to require macro class declaration to exist at any place, just check that if the importing library has a @MyMacro() annotation which would resolve to a name exported by the macro library, then the macro library's declaration is valid as an annotation, either a macro application directly, or another annotation that will be consumed by the macro.)

If a macro library also needs some names to be available at runtime, it needs to inject an import into the library. Or have a way for an augmentation file to export another library into the parent file, not just out of the library.

(What if an augmentation file had two kinds of imports: Local imports which are not shared outside of the file, and global imports which are library-wide? We can make either the default, as long as we can write the other.)
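A sketch of the two kinds of augmentation-file imports being floated here (neither keyword exists; this is purely illustrative syntax):

```dart
// Hypothetical augmentation file, e.g. one generated by a macro.
augment library 'main.dart';

// "Local" import: visible only inside this augmentation file.
import 'macro_helpers.dart';

// "Global" import: hypothetically made visible library-wide, so the parent
// library gains the runtime names the generated code depends on.
global import 'runtime_utils.dart';
```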

@davidmorgan
Contributor

Thanks. Yes, "macro import" would achieve the same separation of deps, and if it's the only way to get the separation, I'll take it ;)

But then there is extra work around the UX for a new type of import: a user has to know that they import macros in a special way, but not annotations or other const expressions--why? They are unlikely to want to care. The macro declaration is a natural place to put utility methods intended to be used at runtime with the macro: so now the user has to know to "import macro" and "import" the same file...?

I'm pretty convinced macro implementations don't belong in the same file as the user-facing API. Separating them out is a better user experience: it hides the internals, and the user-facing import can add those runtime utils in a non-confusing way. It fixes the imports being mixed up, and it turns the implementation into normal Dart code. The closer macros are to normal Dart code the more standard tools e.g. test libraries can just be directly used.

If not pubspec.yaml (actually: package config generated from pubspec.yaml) the metadata could go in annotations. But what we should plan to have either way is a tool that processes a Dart codebase quickly and outputs the metadata you need if you're going to build (or analyze!) with macros in a modular way. As long as that processing (not: "analysis") is fast and absolutely never launches macros, I'm happy ;)

Incidentally, I thought of another type of metadata we could want: restrictions on what the macro can do. E.g., if a macro declares "I will not introspect outside the library I'm applied to", that's good to know ahead of time. We could determine that during compilation, but that's too late; we want to know ahead of time so we can simplify the build graph (caching+invalidation). Then we enforce the restriction during the build.

Re: pulling in additional imports, pulling in imports dynamically as a result of macro execution is a problem: bazel, for example, can't support that at all, by design; the build graph has to be knowable before any build actions. So we need a way to compute the maximum possible import set ahead of time. It's okay if the macro then decides not to use it and we e.g. tree-shake later.

@jakemac53
Contributor

I'm pretty convinced macro implementations don't belong in the same file as the user-facing API. Separating them out is a better user experience: it hides the internals, and the user-facing import can add those runtime utils in a non-confusing way. It fixes the imports being mixed up, and it turns the implementation into normal Dart code. The closer macros are to normal Dart code the more standard tools e.g. test libraries can just be directly used.

I do think the current design is a pretty optimal user experience. You don't have to think about macros differently from normal code, you only need the one import which can provide both the macro and its runtime deps, and clicking on a macro navigates you directly to its implementation which is imo a benefit not a downside.

It does have some implications for builds, and the potential of compile-time deps "leaking" into the runtime deps of a program.

Incidentally, I thought of another type of metadata we could want: restrictions on what the macro can do. E.g., if a macro declares "I will not introspect outside the library I'm applied to", that's good to know ahead of time. We could determine that during compilation, but that's too late; we want to know ahead of time so we can simplify the build graph (caching+invalidation). Then we enforce the restriction during the build.

This is only relevant I think for something like bazel - where the deps all have to be known and provided ahead of time. So I think it is also appropriate for this to be a bazel specific optimization, not something that leaks into external Dart apps.

But what we should plan to have either way is a tool that processes a Dart codebase quickly and outputs the metadata you need if you're going to build (or analyze!) with macros in a modular way.

Many modular build systems probably do want static configuration, at least as an optimization. But this can (and likely should) be bespoke (ie: dart_macro_library for internal bazel rules).

For build_runner, we already have the infrastructure to parse the world and collect a bunch of information quickly in the way you describe (not actually in parallel today, though it could be), the build_modules package does this. So for macros we would likely want to add to the information collected per file (does this define macros, what are their names, and names of their constructors etc). Then we would consume that information to generate the precompiled macro programs, which are then passed to the compilers.

@davidmorgan
Contributor

Thanks Jake.

Re: "only relevant for something like bazel"--I can imagine macros declaring they will play nice to be useful for the analyzer+CFE, too. But all that we can build as needed / later.

Yes, bespoke modular builds will need bespoke work--I think all we should provide from our side is a nice tool / library to get the information; it'd be nice if folks don't have to reinvent the parse+extract part :) ... and even nicer if they can just directly pass whatever the tool spits out to our other tools.

@lrhn
Member

lrhn commented Mar 17, 2024

you only need the one import which can provide both the macro and its runtime deps

That's the part I cannot believe.

The entire library containing the macro class, and all its dependencies, can (and must) be compiled as a separate program. That's fine, if we can recognize which library that is. (Not allowing a macro class to be exported through another library would be an issue.)

But if that library is also imported normally into the macro application library, then every dependency used by the macro implementation, including package:macro, becomes a runtime dependency of the application library.
Maybe tree-shaking can do something, but quite likely it can't figure out everything, since some parts of the implementation library are used at runtime.

I think a stronger separation of phases would be better. Maybe that can be in the macro implementation library instead of the macro application library, but just putting macro before class won't be enough, the class is not the entire implementation.

@jakemac53
Contributor

Fwiw, I have agreed from the start that I don't like the mixing of runtime/compile-time deps here. I just couldn't get alignment on a fix that keeps the user experience I want, and in particular one that makes testing reasonable.

Macros are not fundamentally different from any other annotation today in terms of this though. Those have exactly the same issue. It should be trivial to tree shake classes which are only referenced via annotations, and their associated deps. But, it does make me uncomfortable to rely on that, and it also implies more compile time work to actually achieve that.

@natebosch
Member

natebosch commented Mar 20, 2024

I'd suggest instead marking the import:

@macro import "package:mymacro/mymacro.dart";

Would a general purpose static import be sufficient, or does it need to be specific to macros?

#1360

The analyzer team is also shipping a separate mechanism for importing things specifically to reference them in docs. Could a language level static import solve that use case?

cc @bwilkerson

@davidmorgan
Contributor

static import sounds like a correct solution ... I have not checked rigorously ;) ... but I do not like the UX implications for macros, or the language generally.

static import would add a way to differentiate compile-time-only deps; but it's already not used for consts and we would not have any desire to force it to be used for consts, so it's not a great representation of that concept.

Why would we not have any desire to force it for consts? Because suddenly you would have to both import and static import the same library in lots of places: it's very common for a library to provide both const and non-const expressions.

And this same problem would apply to libraries declaring macros. It will be very common to provide both a macro and some code and/or deps that are additional to the macro, to help the user use it and for use in the implementation. So you would have to both import and import static the same macro library, and that will be just straight up confusing for users.
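Concretely, the doubled-import situation would look something like this (import static is the hypothetical syntax under discussion, and my_macro is an invented package):

```dart
// Needed to apply the macro at compile time:
import static 'package:my_macro/my_macro.dart';
// Needed again, as a normal import, for the runtime helpers the same
// library provides:
import 'package:my_macro/my_macro.dart';

@MyMacro()
class Model {
  // generated members may call helpers reached via the normal import
}
```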

The important distinction for macros is not "used only at compile time", it's "goes in the macro program" as distinguished from "goes in the user program". I remain convinced ;) that the right way out of this is to physically split the macro program and user program by putting the macro implementation elsewhere.

For dartdoc, the import is not an import, it's only for the dartdoc tool; I think it belongs in metadata such as an annotation or comment.

Thanks :)

@jakemac53
Contributor

Fwiw, annotations today do have the exact same issue as macros, in terms of mixing compile time and runtime dependencies (modulo dart:mirrors, but the VM could choose to just have compile time deps available I suppose, to support reflecting on annotations).

It is possibly easier to not do the wrong thing with an annotation, but nothing prevents them from having tons of deps, just due to a pathological package structure.

So, I do think that an import static could be a general-purpose feature worth having. It does mean in some cases having two imports where you only need one today. But it would encourage actually separating those libraries into runtime vs compile-time parts, which would probably be beneficial.

@lrhn
Member

lrhn commented Mar 21, 2024

A compile-time only import, which can only be used for annotations and dartdoc, could be useful.

That would still make a distinction between annotations that are compile time-only, and those that can survive until runtime.
You can't use a compile-time-only argument in a runtime annotation, but nothing in the syntax distinguishes the two.

Even if an import is compile-time only, the names it introduces should probably still be counted as being there, and can conflict with "normal" imports. You're just not allowed to refer to them.

Maybe such an import should always be prefixed, so it's

import "helper.dart" as static $;


@$.Annotation(42)

@davidmorgan
Contributor

Unfortunately, conflating imports for dartdoc with imports needed for the compile would severely threaten the performance, and even the feasibility, of the Dart compile.

By their very nature dartdoc "imports" are likely to introduce dependencies that you would never allow in the build: it's useful to mention symbols from a wider scope, introducing all kinds of circular dependencies and layering violations. They would be bad for performance and would break any build system that prohibits circular dependencies, such as bazel.

Modular builds would be forced to keep track of which static imports are just for dartdoc in order to ignore them.

Build graphs of large projects already require a lot of maintenance work / manual pruning just to keep the build viable. Any feature that causes over-globbing risks making those builds not viable, and would certainly land back on us as additional issues filed like "this workflow / whole program compile is too slow, please optimize".

What we need is that with as little fuss as possible all deps end up in buckets that say precisely when they are needed :)

@jakemac53
Contributor

jakemac53 commented Mar 21, 2024

What we need is that with as little fuss as possible all deps end up in buckets that say precisely when they are needed :)

Like, a macro library? Fwiw, this proposal more directly and more efficiently solves the problem of mixed runtime/comptime deps. It is not possible to write a macro that is still there at runtime with this feature, unlike all the other proposals.

The user experience is also better IMO, for both the consumer and the producer - modulo testing getting a bit more difficult.

@davidmorgan
Contributor

Thanks Jake ... taking a closer look at "macro library" is instructive; I now think it is unfortunately simpler than what we need.

To see this, consider two nested macros: macro A which in its implementation code applies macro B, as a fully hidden implementation detail.

Macro A and macro B are both "macro libraries", but from the point of view of macro A, macro B's implementation code is different to code that it uses directly. They are both "compile time" dependencies, yes, but it's a different compile, a different program.

There are not just two programs, the user program and the macro program: there are an unlimited number of programs, one per macro, with an arbitrary dependency graph between them. And you want to be able to "unwrap" macro code, for example for testing, and run it as "user" code while keeping more-nested macro code as "macro" code.

I suspect that the simplest concept that is sufficiently powerful here gives us, without too much fuss, a build graph in which "applies as a macro" is a different kind of dependency.

One particular file might be used only in the user program, or in the user program + the macro A program, or in just the macro A program, or in the macro A program and the macro B program. Any build system that is modular and/or incremental can reasonably want to treat these cases differently, and wants all of this information to be readily available.
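The idea of "applies as a macro" being a separate kind of build-graph edge can be made concrete with a small sketch. This is purely illustrative (the library names, the two edge maps, and both helper functions are hypothetical, not part of any real Dart build system): regular imports determine one program's source set, while macro-application edges point at other entrypoints whose programs must be compiled first.

```python
# Illustrative model only: two kinds of edges in a build graph.
# Regular "import" edges: library -> libraries it imports.
imports = {
    "app": ["a"],
    "a": [],
    "macro_a": ["impl"],
    "impl": [],
    "macro_b": ["b_impl"],
    "b_impl": [],
}

# "Applies as a macro" edges: library -> macro entrypoints it applies.
applies = {
    "a": ["macro_a"],     # user code applies macro A
    "impl": ["macro_b"],  # macro A's implementation applies macro B
}

def sources_of(entry):
    """Source set of one program: closure over plain imports only."""
    seen, stack = set(), [entry]
    while stack:
        lib = stack.pop()
        if lib in seen:
            continue
        seen.add(lib)
        stack.extend(imports.get(lib, []))
    return seen

def required_programs(entry, acc=None):
    """Macro programs that must be compiled before this program,
    in dependency order (most deeply nested macros first)."""
    if acc is None:
        acc = []
    for lib in sorted(sources_of(entry)):
        for macro in applies.get(lib, []):
            if macro not in acc:
                required_programs(macro, acc)  # the macro's own macro deps first
                acc.append(macro)
    return acc

print(sorted(sources_of("app")))  # macro A's sources stay out of the user program
print(required_programs("app"))   # compile macro B's program, then macro A's
```

Note how macro B's sources appear in macro A's program but in neither the user program nor macro B's consumers beyond that, which is the "one program per macro, with an arbitrary dependency graph between them" point above.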

Once the build graph is available to tools, "compile time only" could maybe be implemented as a lint that you would suppress to run the macro code as user code for testing :)

@jakemac53
Contributor

Macro A and macro B are both "macro libraries", but from the point of view of macro A, macro B's implementation code is different to code that it uses directly. They are both "compile time" dependencies, yes, but it's a different compile, a different program.

Fwiw, I would like this to be supported via reflected imports (this would replace the resolveIdentifier API).

So if a macro is only emitting another macro annotation into generated code, we would have a separation there, if we also had that feature.

If a macro wants to directly invoke another macro, I think we would say that "macro libraries" are allowed to do that, unlike regular libraries (I can't recall if the proposal covers this already).

Maybe then macro unit tests become "macro libraries" too, but I haven't had a chance to fully explore this.

@davidmorgan
Contributor

I think reflected imports is different to what I meant--but it's another good example of a case where there is a new type of dependency being introduced that build systems might want to handle differently for performance. I think we will want to surface this at the package metadata level, too. (Similarly: bazel will need them).

Back to what I was rambling about ;) ... really the key difference here is that we now have multiple entrypoints in the same program. We have the main entrypoint, and zero or more macro entrypoints, each of which corresponds to a separately-compiled program; all the programs are different and they almost always have different source sets. Then, there is a dep graph between the entrypoints: to compile this entrypoint you must first compile this one, and for this one you need this other one, and so on.

Currently the source sets are usually nested: if a program uses a macro, all the sources in the macro program are in the user program, whether they are needed or not. If conditional imports are used, though, the sets can differ: you can have macro source that is not user source.

@davidmorgan
Contributor

Closing in favour of #3728 which looks like the route we'll take here.

@modulovalue
Author

@davidmorgan, as per #3728 (comment), that issue focuses on a different aspect of the macro proposal. This issue is about introducing a unique syntax for macro metadata to be used by macro users (e.g., @!SomeMacro) to explicitly indicate that some declaration is eligible for macro expansion.

Could you reopen this issue, or was there a definitive decision made not to consider it?

Unfortunately, I don't have the time currently to contribute extensively to the discussion, but I still believe this is an approach that would lead to fewer issues long term. It seems to me that being explicit about macro applications has been very successful in the Rust ecosystem, and not doing so is a common annoyance in the Java ecosystem.

@davidmorgan
Contributor

davidmorgan commented May 31, 2024

Thanks @modulovalue.

We are not currently considering any approaches that involve new syntaxes or anything else new; rather we are working towards #3728 which reduces the difference between macros and annotations, making macros just annotations with special properties.

We would have to change direction from #3728 for there to be any possibility of movement on this issue, which is why I closed this in favour of that.

[edit: fix the link to be to the correct issue]
