Show HN: Kameo – Fault-tolerant async actors built on Tokio

github.com

74 points by tqwewe 8 hours ago

Hi HN,

I’m excited to share Kameo, a lightweight Rust library that helps you build fault-tolerant, distributed, and asynchronous actors. If you're working on distributed systems, microservices, or real-time applications, Kameo offers a simple yet powerful API for handling concurrency, panic recovery, and remote messaging between nodes.

Key Features:

- Async Rust: Each actor runs as a separate Tokio task, making concurrency management simple.

- Remote Messaging: Seamlessly send messages to actors across different nodes.

- Supervision and Fault Tolerance: Create self-healing systems with actor hierarchies.

- Backpressure Support: Supports bounded and unbounded mpsc messaging.

I built Kameo because I wanted a more intuitive, scalable solution for distributed Rust applications. I’d love feedback from the HN community and contributions from anyone interested in Rust and actor-based systems.
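
To give a feel for the features above, here's a rough sketch of a local actor based on the README examples. Treat it as a hedged sketch rather than gospel: exact imports, context types, and request-builder methods vary between releases.

    use kameo::Actor;
    use kameo::message::{Context, Message};

    // A simple counter actor; its state is only touched by its own handler.
    #[derive(Actor)]
    struct Counter {
        count: i64,
    }

    // A message asking the counter to increment and reply with the new value.
    struct Inc {
        amount: i64,
    }

    impl Message<Inc> for Counter {
        type Reply = i64;

        async fn handle(&mut self, msg: Inc, _ctx: Context<'_, Self, Self::Reply>) -> Self::Reply {
            self.count += msg.amount;
            self.count
        }
    }

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Spawning runs the actor on its own Tokio task.
        let counter_ref = kameo::spawn(Counter { count: 0 });

        // ask() sends a message and awaits the reply; tell() is fire-and-forget.
        let count = counter_ref.ask(Inc { amount: 3 }).send().await?;
        println!("Count is {count}");
        Ok(())
    }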

Check out the project on GitHub: https://github.com/tqwewe/kameo

Looking forward to hearing your thoughts!

binary132 4 minutes ago

What I find myself wondering (maybe based on a superficial understanding) is how this is fundamentally different from, or better than, gRPC, and whether it could be used as an implementation of gRPC.

wildlogic 4 hours ago

Hi - any documentation regarding actor registration? Is there a conventional way to inform a remote actor about a new actor? Would this be sent in a message? How does actor.register('name') work? Maybe this could be a useful addition to the documentation. Thanks.
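
From what I can tell from examples/remote.rs, the pattern is roughly "register on the hosting node, look up by name from a peer". A rough sketch, assuming that API (I may be misreading the exact spawn/register calls):

    // On the hosting node: spawn locally, then register under a well-known name.
    let actor_ref = kameo::spawn(MyActor::default());
    actor_ref.register("my_actor").await?;

    // On any peer that has joined the swarm: look the name up and message it.
    if let Some(remote_ref) = RemoteActorRef::<MyActor>::lookup("my_actor").await? {
        let count = remote_ref.ask(&Inc { amount: 10 }).send().await?;
        println!("count is {count}");
    }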

BWStearns 8 hours ago

Looks very cool. Is there any documentation on how it works for communication over a network? I see the remote/swarm section but is there an overview somewhere?

fuddle 4 hours ago

Looks good, it would be great to see more examples in the docs.

throwawaymaths 8 hours ago

Is this actually distributed? I see no evidence that it can be used even for IPC with the built-in features.

  • hansonkd 7 hours ago

    Check the examples folder.

    • qwertox 7 hours ago

      https://github.com/tqwewe/kameo/blob/main/examples/remote.rs

          // Bootstrap the actor swarm
          if is_host {
              ActorSwarm::bootstrap()?
                  .listen_on("/ip4/0.0.0.0/udp/8020/quic-v1".parse()?)
                  .await?;
          } else {
              ActorSwarm::bootstrap()?.dial(
                  DialOpts::unknown_peer_id()
                      .address("/ip4/0.0.0.0/udp/8020/quic-v1".parse()?)
                      .build(),
              );
          }
      
      
          let remote_actor_ref = RemoteActorRef::<MyActor>::lookup("my_actor").await?;
          match remote_actor_ref {
              Some(remote_actor_ref) => {
                  let count = remote_actor_ref.ask(&Inc { amount: 10 }).send().await?;
                  println!("Incremented! Count is {count}");
              }
              ...
    • throwawaymaths 5 hours ago

      Thanks! It's not in the front page material.

__erik 8 hours ago

This looks really nice! Curious if it's running in production anywhere.

  • qwertox 7 hours ago

    I agree, really nice syntax.

    There's a limitation mentioned in the docs:

      While messages are processed sequentially within a single actor, Kameo allows for concurrent processing across multiple actors.
    
    which is justified by:

      This [sequential processing] model also ensures that messages are processed in the order they are received, which can be critical for maintaining consistency and correctness in certain applications.
    
    I agree with this, and it gives the library a well-defined use.

    Docs and examples are well made.

    • zackangelo 2 hours ago

      This limitation is common to most implementations of the actor model. In fact, I think a lot of people would consider it a feature rather than a limitation, because it allows you to reason about your concurrent behavior in a more straightforward way.
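
      A quick sketch of what that buys you, reusing a hypothetical Counter/Inc pair like the README-style examples (and assuming tell() works like ask() minus the reply): messages to one ref are handled strictly in send order, while a second actor drains its own mailbox concurrently.

          let a = kameo::spawn(Counter { count: 0 });
          let b = kameo::spawn(Counter { count: 0 });

          // `a` handles these two strictly in order...
          a.tell(Inc { amount: 1 }).send().await?;
          a.tell(Inc { amount: 2 }).send().await?;

          // ...while `b` processes its message concurrently on its own task.
          b.tell(Inc { amount: 5 }).send().await?;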

m00x 8 hours ago

What are the advantages and disadvantages vs using Actix or Ractor?

  • status_quo69 3 hours ago

    I actually went through this exact exercise recently, but this library didn't show up in my searches for a good Rust actor framework, so take that with a grain of salt. At first blush it looks very similar to the interface provided by actix; I'm not sure how supervision works. My take is that most of these frameworks tend to arrive at the same(ish) solution, so pick the one with the best API. I liked ractor, although not having &mut self eventually wore me down. I swapped a small side project over to Stakker instead, and while the macros intimidated me at first, the implementation really impressed me in terms of performance and API characteristics. It really feels like there's just enough there and no more.

  • stusmall 3 hours ago

    The actix crate is deprecated. I looked on their site and repo and couldn't find an official announcement of deprecation, but here is a link to what the lead said when I reached out with questions a few months ago: https://discord.com/channels/771444961383153695/771447523956...

    EDIT: Tangent, but if anyone has experience making deterministic actor-model systems that can be run under a property test, I'd love to know more. It would make an amazing blog post, even if it would have a very narrow audience.

  • mtndew4brkfst 3 hours ago

    PSA: Actix (not Actix-web) is fairly inactive - one of the maintainers informally said not to use it for any new projects during one of this year's RustConf chats.

TheMagicHorsey 8 hours ago

Looks really nice.

But sometimes when I see projects like this in other languages, I think, are you sure you don't want to use Erlang or something else on the BEAM runtime and just call Rust or C via their NIFs?

I used Erlang about a decade ago, and even then it was so robust, easy to use, and mature. Granted, you have to offload anything performance-sensitive to native functions, but the interface was straightforward.

In the Erlang community back then, there were always legends about how WhatsApp had only 10 people and 40 servers to serve 1 billion customers. Probably an exaggeration, but I could totally see it being true. That's how well thought out and robust it was.

Having said all that, I don't mean to diminish your accomplishment here. This is very cool!

  • hansonkd 7 hours ago

    I think a lot of the issues BEAM was trying to solve were solved by processors getting bigger and gaining more cores.

    BEAM's benefit 10-20 years ago was that inter-node communication was essentially the same as communication within a single process, meaning I could talk to an actor on a different machine the same way as if it were in the same process.

    These days people just spin up more cores on one machine. Getting good performance out of multi-node Erlang is a challenge and only really works if you can host all the servers on one rack to simulate a multi-core machine. The built-in distributed part of Erlang doesn't work so well in a modern VPS / AWS setup, although some try.

    • binary132 11 minutes ago

      “Just spin up more cores on one machine” has a pretty low scale ceiling, don’t you think? What, 96 cores? Maybe a few more on ARM? What do you do when you need thousands or tens of thousands of cores?

      Well, what I do is think of functions as services, and there are different ways to get that, but BEAM / OTP are surely among them.

  • greenavocado 7 hours ago

    Erlang suffers from massive context switching, and its type checking is inferior.
