
Activiti 6 pluggable persistence feedback

Question asked by bjoern_s on Nov 20, 2015
Latest reply on Feb 16, 2016 by jbarrez
Hello,

I do not need any help here; this is just feedback ;) I'd like to share my first experience using the new pluggable persistence facility. My current use case is to completely replace the "core" persistence implementation with a Hibernate/JPA based implementation. This works well so far, but I found the following drawbacks/issues:

1. There is a strange "setId" invocation on the ProcessDefinitionEntity which replaces the (JPA-generated) id with another String of the form workflowid:number:newid.
So it is not possible to work with plain generated ids, because the process definition id seems to be "special".
-> A workaround is possible by setting a larger column length on this id and making it updatable.
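To illustrate the effect (a hypothetical sketch; names are illustrative and assume the composite id follows the workflowid:number:newid shape described above): whatever id JPA generates is later overwritten with a longer composite string, which is why the column needs extra length and must be updatable:

```java
// Hypothetical demo of the id rewrite described above; the real logic lives
// in the engine's deployment code, not in this class.
class ProcessDefinitionIdDemo {
    // The engine replaces the persisted id with "workflowid:number:newid".
    static String composeId(String workflowId, int number, String generatedId) {
        return workflowId + ":" + number + ":" + generatedId;
    }
}
```

So a short generated id such as "42" ends up stored as something like "invoiceProcess:1:42", which a tightly sized, non-updatable id column cannot hold.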


2. ExecutionEntity and TaskEntity implement/inherit from VariableScope. It would be nice if this dependency were moved to a delegate class, because it has nothing to do with persistence at all and creates a lot of boilerplate (200+ lines) in custom implementations. For example, the access could be refactored to ExecutionEntity/TaskEntity.getVariableScope(), which could return a VariableScopeInstance.
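A minimal sketch of the delegation idea, using simplified stand-in types (the real VariableScope interface in Activiti is far larger; these names are only illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, hypothetical stand-in for a small subset of VariableScope.
interface VariableScope {
    Object getVariable(String name);
    void setVariable(String name, Object value);
}

// The variable-handling boilerplate lives once, in a reusable delegate.
class VariableScopeInstance implements VariableScope {
    private final Map<String, Object> variables = new HashMap<>();

    public Object getVariable(String name) { return variables.get(name); }
    public void setVariable(String name, Object value) { variables.put(name, value); }
}

// A custom persistence entity no longer implements VariableScope itself;
// it only exposes the delegate, keeping the entity focused on persistence.
class CustomExecutionEntity {
    private final VariableScopeInstance variableScope = new VariableScopeInstance();

    public VariableScope getVariableScope() { return variableScope; }
}
```

With this shape, a custom ExecutionEntity or TaskEntity implementation would not have to re-implement the 200+ lines of variable handling.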

3. Some interfaces leak ByteArrayRef. This is a concrete final class and cannot be subclassed, which makes implementing custom binary storage quite difficult. My solution is to always return "new ByteArrayRef()" from all getters and to implement a custom blob field.
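The workaround might look like the following sketch (the ByteArrayRef here is a hypothetical stand-in mirroring only the shape of the real final class, and CustomVariableEntity is an invented example name):

```java
// Hypothetical stand-in for the real final ByteArrayRef; it cannot be
// subclassed, so a fresh empty instance carries no payload.
final class ByteArrayRef {
    private byte[] bytes;
    public byte[] getBytes() { return bytes; }
}

// Workaround sketch: satisfy the leaking interface with a throwaway
// ByteArrayRef while persisting the payload in a custom blob field.
class CustomVariableEntity {
    private byte[] blob; // in a real JPA mapping this field would be annotated @Lob

    public void setBytes(byte[] bytes) { this.blob = bytes; }
    public byte[] getBytes() { return blob; }

    // Interface methods that leak ByteArrayRef just hand back an empty instance.
    public ByteArrayRef getByteArrayRef() { return new ByteArrayRef(); }
}
```

The binary payload then flows entirely through the custom blob field, and the empty ByteArrayRef only exists to keep the interface happy.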

4. It is hard to use JPA-typical eager/lazy/batched loading. Because references and collections are handled by setting and getting ids and then querying on those ids, JPA-specific lazy/eager/batched collection fetching is hard to implement.
This can be worked around, but consider it a warning to anyone who tries the same.

(!) 5. I think I found a major bug in TakeOutgoingSequenceFlowsOperation.leaveFlowNode which causes problems when executing a workflow within the same transaction in memory:
If more than one connection leaves a node, a new execution is spawned. This execution is not added to the parent's child executions list; only the parent id is set on the child.
So if the child executions are accessed within the same transaction, the spawned execution is missing.
My current workaround is to add the current execution to the parent execution's executions list every time setCurrentFlowElement is called on the ExecutionEntity:
public void setCurrentFlowElement(FlowElement currentFlowElement) {
    // Workaround hack to ensure the parent has this execution as a child
    if (parentId != null) {
        ExecutionEntityJpa parent = (ExecutionEntityJpa) RequestContext.getThreadContext().getManager().find(ExecutionEntityJpa.class, parentId);
        if (!parent.getExecutions().contains(this)) {
            System.out.println("WORKAROUND WORKING");
            parent.addChildExecution(this);
        }
    }
}

This would be an easy fix: call
addChildExecution(outgoingExecutionEntity)
in TakeOutgoingSequenceFlowsOperation.leaveFlowNode.
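The gist of the problem and the proposed fix can be shown with a simplified, hypothetical stand-in type (only parentId and the child list are modelled; the real ExecutionEntity is much richer):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for ExecutionEntity, modelling only what the bug needs.
class SimpleExecution {
    final String id;
    String parentId;
    final List<SimpleExecution> executions = new ArrayList<>(); // child executions

    SimpleExecution(String id) { this.id = id; }

    // The suggested fix: when spawning an outgoing execution, register it on
    // the parent's in-memory child list as well, not only via parentId.
    void addChildExecution(SimpleExecution child) {
        child.parentId = this.id;
        executions.add(child);
    }
}
```

Setting only child.parentId (as leaveFlowNode does today) leaves the parent's in-memory executions list stale within the same transaction; calling addChildExecution keeps both sides consistent.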



Should I open a bug report for this (5.)?

Björn S.

