Actor state via messages #190

Draft · wants to merge 16 commits into master
Conversation

goodboy (Owner) commented Jan 24, 2021

An example for @fjarri doing "actor state mutation via messages".

Feel free to criticize the heck out of it.

If you want to see how subactors could modify each other's state we can add that as well.

This is a draft of the `tractor` way to implement the example from the
"process pool" in the stdlib's `concurrent.futures` docs:

https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example
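For reference, here's the gist of that stdlib example (abridged from the linked docs; the full listing checks a longer list of primes):

import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    # fan the primality checks out to a pool of worker processes
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print(f'{number} is prime: {prime}')

if __name__ == '__main__':
    main()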

Our runtime is of course slower to start up, but once up we get the
same performance; this confirms that we need to focus some effort now
on warm-up and teardown times.  The mp forkserver method definitely
improves startup delay; rolling our own will likely be a good hot spot
to play with.

What's really nice is that our implementation is done in approx 1/10th the code ;)

Also, do we want to offer an interface that yields results as they arrive?

Relates to #175
import trio
import tractor

_snd_chan, _recv_chan = trio.open_memory_channel(100)
goodboy (Owner Author):

sorry ignore this..
got left in from another idea.

import tractor

_snd_chan, _recv_chan = trio.open_memory_channel(100)
_actor_state = {'some_state_stuff': None}
goodboy (Owner Author) commented Jan 24, 2021:

This is a module-level variable, meaning it maintains its state for the entire process lifetime.
We could have just as easily made some function that "sleeps forever" and wakes up periodically to report its state if need be.
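A minimal sketch of the idea (the function name and message shape are just illustrative): a task run inside the subactor mutates the module-level dict, which persists across calls for the life of the process:

_actor_state = {'some_state_stuff': None}

async def update_state(key, value):
    # runs inside the subactor; the module-level dict outlives
    # this call, so each invocation sees prior mutations
    _actor_state[key] = value
    return _actor_state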



@dataclass
class MyProcessStateThing:
goodboy (Owner Author):

Instead of this, what are you after?

A function that creates some object and then makes that object mutable via another inbound message?

goodboy (Owner Author):

Maybe you want something like Ray's "actors" (which I would argue aren't really "actor model" actors):
https://docs.ray.io/en/latest/actors.html

We can also accomplish this but it will require slightly more machinery.

goodboy (Owner Author) commented Jan 25, 2021

How's that 2nd example, @fjarri, for a class / Ray-style actors API?

# in a global var or in another class-scoped variable?
# If you want it somehow persisted in another namespace
# I'd be interested to know "where".
actor = ActorState()
goodboy (Owner Author):

I get this isn't ideal in an idiomatic python sense (though it really is no performance hit), but the alternative is some other way to store this instance across function-task calls.

The normal way would be a module-level variable (since they're "globally scoped"), but I guess in theory you could have a function that stays alive and constantly passes the instance to other tasks over a memory channel (still, in that case, how does the new task get access to the channel handle?). The other alternative is a module-level class with a class-level variable, which is again globally scoped on the class.
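For comparison, the class-level variant just mentioned might look like this (names illustrative); the dict lives on the class object, which is itself module-scoped, so it persists across function-task calls just like a module-level global:

class ActorState:
    # class-level storage: shared by all instances (and all
    # function-task invocations) within this process
    _store = {}

    def update(self, key, value):
        ActorState._store[key] = value
        return ActorState._store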

)


def get_method_proxy(portal, target=ActorState) -> MethodProxy:
goodboy (Owner Author):

In case anyone gets cranky about this, from the pykka docs:

The proxy object will use introspection to figure out what public attributes and methods the actor has, and then mirror the full API of the actor. Any attribute or method prefixed with underscore will be ignored, which is the convention for keeping stuff private in Python.
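A rough sketch of what that introspection step might look like (purely illustrative; not the actual `get_method_proxy()` internals):

import inspect

def public_methods(target):
    # mirror only the public API: skip anything prefixed with an
    # underscore, per the usual python privacy convention
    return {
        name: member
        for name, member in inspect.getmembers(target)
        if callable(member) and not name.startswith('_')
    }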


fjarri commented Jan 26, 2021

Sorry, there's been a lot of input from you in different channels; I'll summarize my thoughts here.

I'm afraid I still don't see the functionality (or even ways to implement the functionality) I need. When I think of an actor system, these are some of the things I'm worried about:

  • how can a child send a message to its parent?
  • if a child wants to terminate or restart itself, how will the parent handle it?
  • how can I make message handlers async? How can I guarantee that as long as one message handler of an actor is running, another one will not be started?
  • how can I start a child in the same thread/other thread/other process with the same API, without changing the child's code?
  • how can I delay the message delivery?
  • how can I monitor the state of an actor from another actor (not necessarily the parent)?

I can see how my API (in https://gist.github.com/fjarri/803d2611fc8487c7853f17ed4ad1ec10) allows one to implement this functionality. The framework will have enough information to handle these problems. Right now tractor seems too low-level for all that. I will have no more help from it than I do from trio itself, except for more convenient multiprocessing.

goodboy (Owner Author) commented Jan 27, 2021

@fjarri thanks so much for the writeup 👍🏼.

I'll address these in order:

how can a child send a message to its parent?

async with tractor.wait_for_actor('parent_name') as portal:
    ... do stuff with portal...

This is documented only very briefly. Note the "arbiter" stuff there is wrong and going away.

if a child wants to terminate or restart itself, how will the parent handle it?

However it wants to. Restarts are a naive, basic operation, the exact same as spawning or running a task through a portal. If you want a special error that the parent expects to handle, you can just raise it and catch it on the parent's side, or you could have the remote task return a restart message of your choosing. I've pushed up an example showing both restarts of multiple task(s) within a single subactor as well as restarts of multiple subactors entirely:
https://github.com/goodboy/tractor/pull/190/files#diff-3d37ffe6c0c1d6567affe87d7c0edca4c18adb1d5735075084263351e1f642f8
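For instance, the "return a restart message" flavor might look roughly like this; `child_task` and the `'restart'` sentinel are made up for illustration:

async def child_task():
    # ... do work; return 'restart' to ask to be respawned ...
    return 'restart'

async def respawn_remote_task(portal):
    # keep re-running the remote task, respawning it whenever
    # it returns the (made up) restart sentinel
    while True:
        result = await portal.run(child_task)
        if result != 'restart':
            break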

To understand what's going on more easily from a terminal, you can run it with the following command:
$TERM -e watch -n 0.1 "pstree -a $$" & python examples/actors/most_basic_supervisor.py || kill $!

how can I start a child in the same thread/other thread/other process with the same API, without changing the child's code?

I honestly have no idea what you're asking here, so you'll have to clarify. There is currently no hot-reloading of code support yet, mostly because python doesn't have great facilities for this and thus it needs some thinking, but we've had lots of discussion.

how can I delay the message delivery?

Again, I don't know what you mean. If you want a reactive-style delay there are already trio libs underway supporting this.

how can I monitor the state of an actor from another actor (not necessarily the parent)?

As per the 1st bullet, contact the actor by name, get a portal, send it a message, get a response.
There is no built-in "monitoring" system on purpose for now.
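Concretely, polling a peer's state could look something like the below, where `get_state` is a hypothetical task the target actor exposes:

import tractor

async def get_state():
    # hypothetical: runs in the target actor and returns its
    # module-level state dict
    return _actor_state

async def poll_state(name):
    # locate the peer by name, then ask it for its state
    async with tractor.wait_for_actor(name) as portal:
        return await portal.run(get_state)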

# We'll show both.

async with trio.open_nursery() as n:
# we'll doe the first as a lone task restart in a daemon actor
goodboy (Owner Author):

brutal typo

goodboy (Owner Author):

it's clearly 3 tasks that are constantly restarted...

for i in range(4):
    n.start_soon(respawn_remote_task, p0)

# Open another nursery that will respawn sub-actors
goodboy (Owner Author):

nope x this. no extra nursery required.. originally it had that but we don't need it.

# Open another nursery that will respawn sub-actors

# spawn a set of subactors that will signal restart
# of the group of processes on each failure
goodboy (Owner Author):

still need to add a loop around the nursery to restart everything if we get a certain error raised. This starts getting into a more formal supervisor strategy API that we have yet to design.
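Something like this rough loop, maybe; `GroupRestart` is a made-up error type standing in for whatever "certain error" we settle on:

import tractor

class GroupRestart(Exception):
    """Made-up signal: tear down the whole group and respawn it."""

async def supervise_group():
    while True:
        try:
            async with tractor.open_nursery() as tn:
                ...  # spawn the set of subactors here
            break  # clean exit: no restart requested
        except GroupRestart:
            continue  # respawn everything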

goodboy (Owner Author):

hmm, actually just had the idea of doing something like:

async with tractor.supervise_with('one_for_one') as strat:
    async with tractor.open_nursery() as tn:
        ...  # do tractor stuff

@guilledk @salotz thots?

goodboy (Owner Author):

hmm we might have to plug into the nursery internals a bit more to get granular control on each sub-proc. needs a little more pondering fo sho.

goodboy (Owner Author):

oh and common strat names from erlang:
https://learnyousomeerlang.com/supervisors

goodboy (Owner Author):

actually I guess you'd want to pass the nursery into the strat so that you get the granular control.. hmm, that might compose nicely actually. Then in theory you could use a stack to compose multiple strats?

Woo, this actually will be interesting I'm thinking.
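e.g. stacking several of the (still hypothetical) `supervise_with()` managers over one nursery via a contextlib stack:

from contextlib import AsyncExitStack

import tractor

async def composed_supervision():
    async with tractor.open_nursery() as tn:
        async with AsyncExitStack() as stack:
            # strat names per the erlang conventions linked above
            for strat in ('one_for_one', 'one_for_all'):
                await stack.enter_async_context(
                    # hypothetical API: the strat receives the nursery
                    # for granular per-subactor control
                    tractor.supervise_with(strat, tn)
                )
            ...  # spawn subactors under the composed strats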

Base automatically changed from eg_worker_poolz to master February 22, 2021 14:55
goodboy mentioned this pull request Mar 4, 2021
goodboy (Owner Author) commented May 17, 2021

FYI we can probably vastly improve the example "proxy" code here with the new native bidir streaming support in #209 🏄🏼
