
5.9 and exclusive jobs

Question asked by heymjo on Mar 9, 2012
Latest reply on Mar 9, 2012 by meyerd
Hi,

Just wanted to share this observation, maybe other people have the same opinion.

The doc says about exclusive jobs:

It is actually not a performance issue. Performance is an issue under heavy load. Heavy load means that all worker threads of the job executor are busy all the time.
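For reference, the knob the doc is talking about: async continuations in Activiti are exclusive by default, and (if I read the user guide correctly) a job can be marked non-exclusive per activity with activiti:exclusive="false", so that jobs of the same process instance may be picked up by different worker threads. A sketch, modelled on the service task definition below; verify the attribute against your Activiti version:

```xml
<serviceTask id="servicetask1" name="Service Task"
             activiti:delegateExpression="${theServiceTask}"
             activiti:async="true"
             activiti:exclusive="false" />
```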

I wanted to get a feeling for the impact of this and created a small test case (see attached picture). Just imagine that the actual process is much larger than that, but all non-service tasks have been stripped from the process flow for the sake of clarity.

[attachment=0]exclusivejobs.jpg[/attachment]

All tasks are defined as <serviceTask id="servicetaskX" name="Service Task" activiti:delegateExpression="${theServiceTask}" activiti:async="true" /> and the JavaDelegate implementation is just this:

import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Component;

@Component("theServiceTask")
public class ServiceTask implements JavaDelegate {

    private final AtomicInteger order = new AtomicInteger(0);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // capture the sequence number once, so the "sleeping" and "finished"
        // lines always show the same value even if jobs ever run concurrently
        int seq = order.incrementAndGet();
        int sleeptimeMilliseconds = new Random().nextInt(5) * 1000;
        long id = Thread.currentThread().getId();
        System.out.println(seq + " - thread " + id + " sleeping " + sleeptimeMilliseconds + " ms");
        Thread.sleep(sleeptimeMilliseconds);
        System.out.println(seq + " - thread " + id + " finished");
    }
}

When I start this process I get this output, which is totally expected:


1 - thread 20 sleeping 3000 ms
1 - thread 20 finished
2 - thread 20 sleeping 3000 ms
2 - thread 20 finished
3 - thread 20 sleeping 3000 ms
3 - thread 20 finished
4 - thread 20 sleeping 1000 ms
4 - thread 20 finished
5 - thread 20 sleeping 3000 ms
5 - thread 20 finished
6 - thread 20 sleeping 2000 ms
6 - thread 20 finished
7 - thread 20 sleeping 0 ms
7 - thread 20 finished
8 - thread 20 sleeping 2000 ms
8 - thread 20 finished
9 - thread 20 sleeping 3000 ms
9 - thread 20 finished
10 - thread 20 sleeping 4000 ms
10 - thread 20 finished
11 - thread 20 sleeping 3000 ms
11 - thread 20 finished
12 - thread 20 sleeping 0 ms
12 - thread 20 finished
13 - thread 20 sleeping 0 ms
13 - thread 20 finished

We observe that 1) all service tasks are executed on the same thread and 2) they are executed sequentially.

For me the impact of this is that we should NEVER put any long-running async service tasks (e.g. report generation) in a process definition, because they can potentially block all execution paths of the process instance. Rather, we should have the service task finish quickly and do the real work asynchronously somewhere else. A receive task after the service task could then wait for the work to finish before execution continues.
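The hand-off pattern above can be sketched in plain Java, with no Activiti dependency so it runs standalone. The delegate only *submits* the long-running work and returns immediately, freeing the job executor thread; when the work finishes, the worker signals the waiting receive task. The names here are hypothetical, and the CompletableFuture stands in for something like runtimeService.signal(executionId):

```java
import java.util.concurrent.*;

public class HandOffSketch {
    // stands in for signalling the receive task (runtimeService.signal in Activiti)
    static final CompletableFuture<String> receiveTaskSignal = new CompletableFuture<>();

    // a separate pool that does the heavy lifting, outside the job executor
    static final ExecutorService reportWorkers = Executors.newFixedThreadPool(2);

    // what the JavaDelegate would do: enqueue the work and return at once
    static void execute(String executionId) {
        reportWorkers.submit(() -> {
            // the actual long-running work, e.g. report generation
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // work done: signal the receive task so the process can continue
            receiveTaskSignal.complete(executionId);
        });
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        execute("exec-1");
        long delegateMillis = (System.nanoTime() - start) / 1_000_000;
        // the delegate returned almost immediately, long before the work finished
        System.out.println("delegate returned in ~" + delegateMillis + " ms");
        String signalled = receiveTaskSignal.get(5, TimeUnit.SECONDS);
        System.out.println("receive task signalled for " + signalled);
        reportWorkers.shutdown();
    }
}
```

This way the job executor thread is occupied only for the time it takes to enqueue the work, not for the duration of the work itself.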

So it is true that the current implementation does not impact process performance as such, but the risk of creating "bottlenecks" is very real if any of your service tasks is long-running. In "real world" processes it is the machine-oriented process flows (many service tasks, few or no user tasks) that will suffer most from this.

Thanks for any thoughts on this.
