
If you block in a thread in a fork join pool, the pool will start another thread to ensure other tasks can continue; that's why you see the thread pool grow. A thread pool executor, on the other hand, uses a fixed number of threads, and if you block on it you risk deadlocking the entire application.

The behavior of an UntypedPersistentActor is defined by implementing the OnRecover and OnCommand methods.
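The compensation behaviour can be seen with the plain Scala standard library: the default global ExecutionContext is a fork join pool, and wrapping a stall in `blocking { }` tells the pool a thread is blocked so it may spawn extra threads rather than let all tasks queue up. This is a minimal stdlib sketch of the idea, not Akka's dispatcher itself; the object name `BlockingDemo` and the task count of 8 are illustrative:

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object BlockingDemo {
  def run(): Int = {
    // Each future sleeps; `blocking` marks the sleep as managed blocking,
    // so the fork join pool may create compensation threads instead of
    // letting the whole pool stall on a handful of sleeping tasks.
    val futures = (1 to 8).map { i =>
      Future { blocking { Thread.sleep(100) }; i }
    }
    Await.result(Future.sequence(futures), 10.seconds).sum
  }

  def main(args: Array[String]): Unit =
    println(run()) // prints 36 (the sum 1 + 2 + ... + 8)
}
```

Without the `blocking` wrapper the pool has no way to know the thread is stalled, which is exactly the situation that makes a fork-join-backed dispatcher grow or starve.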

Akka persistence supports event sourcing with the UntypedPersistentActor abstract class. An actor that extends this class uses the persist method to persist and handle events.

The default dispatcher that Akka uses is a fork join pool. Its thread pools are tuned for non-blocking work: all the IO and inter-service communication mechanisms Akka provides are non-blocking. If you use only non-blocking calls, you will see the thread pools behave with very low thread counts, and you won't find things going unresponsive. But the moment you start blocking, all bets are off: blocking requires one thread per request. The Scala actor model, and more specifically Akka, can help here.

From akka cluster-sharding:

  val counterRegion: ActorRef = ClusterSharding(system).start(
    typeName = "Counter",
    entryProps = Some(Props[Counter]),
    idExtractor = idExtractor,
    shardResolver = shardResolver)

It then resolves the Entry actor that receives the message based on how you define the idExtractor.

Akka Streams are back-pressured by default, but it is possible to alter this behaviour. For example, we can add a fixed-size buffer with different overflow strategies:

  stream.buffer(100, OverflowStrategy.dropTail)

In this case, up to 100 elements will be collected, and on the arrival of the 101st, the youngest element will be dropped.
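The dropTail policy itself is easy to model without Akka: keep the buffer at a fixed capacity and, when a new element arrives into a full buffer, discard the youngest buffered element to make room. The class below is an illustrative stdlib sketch of that policy, not the Akka Streams implementation; `DropTailBuffer` is a made-up name:

```scala
import scala.collection.mutable

// Illustrative model of OverflowStrategy.dropTail: a bounded buffer that,
// when full, discards the youngest buffered element to admit the new one.
class DropTailBuffer[A](capacity: Int) {
  private val buf = mutable.ArrayDeque.empty[A]

  def offer(a: A): Unit = {
    if (buf.size == capacity) buf.removeLast() // drop the youngest element
    buf += a
  }

  def toList: List[A] = buf.toList
}

object DropTailDemo {
  def main(args: Array[String]): Unit = {
    val b = new DropTailBuffer[Int](3)
    (1 to 5).foreach(b.offer)
    // The oldest elements survive; each overflow replaced the newest slot.
    println(b.toList) // prints List(1, 2, 5)
  }
}
```

Note how this differs from dropHead, which would instead evict the oldest element and keep the most recent arrivals.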


We observe that on increasing concurrent HTTP calls to our service, the thread count (-dispatcher) keeps increasing (see the screenshot from VisualVM). Also, after the requests stop, the thread count doesn't go down. Is this proportional increase of threads expected behaviour? How do we control this and reuse the same actors, or kill the actors after a request has been served? I'm running the shopping-cart example from lagom-samples. In the Akka docs introduction it highlights that "millions of actors can be efficiently scheduled on a dozen of threads", so in that case why would we need to create threads proportional to the number of concurrent requests?

Are you blocking in your calls? E.g., are you calling Thread.sleep, or using some synchronous IO? If so, then what you're seeing is entirely expected.
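If blocking calls are unavoidable, the usual remedy is to isolate them on a separate, bounded pool so the default dispatcher stays small and responsive (in Akka you would configure a dedicated dispatcher in application.conf and use it for the blocking actors). The stdlib sketch below shows the idea; the pool size of 4 and the names `IsolatedBlockingDemo` and `blockingEc` are illustrative:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object IsolatedBlockingDemo {
  // A bounded pool reserved for blocking work: it can never grow past
  // 4 threads, so blocking here cannot exhaust the rest of the app.
  private val blockingPool = Executors.newFixedThreadPool(4)
  implicit private val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutorService(blockingPool)

  def run(): Int = {
    // Simulated blocking calls, all confined to the dedicated pool.
    val futures = (1 to 8).map(i => Future { Thread.sleep(50); i * 2 })
    val sum = Await.result(Future.sequence(futures), 10.seconds).sum
    blockingPool.shutdown()
    sum
  }

  def main(args: Array[String]): Unit =
    println(run()) // prints 72 (the sum 2 + 4 + ... + 16)
}
```

The trade-off is explicit: throughput of the blocking work is capped at the pool size, but the rest of the application keeps its low, stable thread count.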
