The official documentation is at: http://docs.alfresco.com
Table of Contents
- 1 Q/A
- 1.1 Saved Query Markup
- 1.2 Time Travel on the Alfresco Repository
- 1.3 Practical Limitations of Alfresco Virtualized Applications
- 1.4 What are the semantic issues with moving the new repo to DM/RM?
- 1.5 Unorganized questions from the forums
I am using this area to track questions I have about Alfresco (and the answers I get) across the different aspects pertaining to ECM, software development, the business model, and the community. For a first pass I just went through the forums and pulled out a bunch of questions I had. Eventually I'd like to start tracking interesting questions: when I found them, and when they were answered.
Saved Query Markup
Posted: Oct 31st
- Is there any documentation around the saved query markup?
- What are the long term plans for this markup?
- Are there API calls to invoke a search based on this markup?
Time Travel on the Alfresco Repository
Posted: Oct 15th
Answered: Oct 16th
One of the most exciting features of WCM is the ability to take a snapshot of the repository. Users can then walk back and forth through time over the collection of snapshots. This mechanism relies on the new repository infrastructure, which is SVN-like in nature, almost like a collection of SVN repositories.
Am I right in thinking that the repository infrastructure that underlies this capability is, or will be, common to the entire suite of Alfresco ECM products?
If so, will the library and other services (exposed through APIs) allow me to take advantage of this capability? For example, will I be able to traverse snapshots of the store via JCR?
For the foreseeable future, snapshotting and time-travel (not yet exposed in the preview) will not be available for the entire ECM stack. In order to achieve the 'magic' of time-travel, a significantly different repository architecture was needed, one that has some fairly fundamental semantic differences from the Alfresco DM repository. We will be working on tighter and tighter integration of the two models but don't have a clear path yet to 'every feature available for all content'.
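As a purely illustrative model of the snapshot idea above (the names `SnapshotStore`, `commit`, and `read_at` are my own invention, not an Alfresco API), each commit could freeze the working copy into an immutable snapshot that can later be read by id:

```python
# Toy model of a snapshot-per-commit store, NOT the actual Alfresco/AVM
# implementation. All names here are hypothetical.

class SnapshotStore:
    """Each commit freezes the current working state as an immutable snapshot."""

    def __init__(self):
        self._working = {}      # mutable working copy: path -> content
        self._snapshots = []    # frozen {path: content} dicts, one per commit

    def write(self, path, content):
        self._working[path] = content

    def commit(self):
        """Freeze the working copy; return the new snapshot id."""
        self._snapshots.append(dict(self._working))
        return len(self._snapshots) - 1

    def read_at(self, snapshot_id, path):
        """'Time travel': read a path as it existed in a past snapshot."""
        return self._snapshots[snapshot_id].get(path)

store = SnapshotStore()
store.write("/site/index.html", "v1")
s0 = store.commit()
store.write("/site/index.html", "v2")
s1 = store.commit()
assert store.read_at(s0, "/site/index.html") == "v1"  # the past is still there
assert store.read_at(s1, "/site/index.html") == "v2"
```

The point of the sketch is only that snapshots are cheap, immutable views; whether the real repository exposes them through JCR is exactly the open question.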
Practical Limitations of Alfresco Virtualized Applications
Posted: Oct 18th
Also, what are the intentions for handling data in external sources? Kevin mentioned in his podcast that you would be able to roll the system back to a point in time where your profile looked like X. I can see this as a possibility as long as your data is stored within the Alfresco repository. But virtualized applications, and in particular a virtualized application from the past, get a little bit tricky when they rely heavily on resources that are outside the system and perhaps no longer exist. Is there a good solution for this?
What do you imagine are the practical limitations of the virtualized application system?
What are the semantic issues with moving the new repo to DM/RM?
Posted: Oct 18th
I can understand that. It is basically a whole new approach. Wow, it will be neat to be able to apply the same type of functionality to the CIFS server and FTP server. These two are like Tomcat in that if you are lying to the client (as WCM lies to Tomcat about its application source), then we are fine, because as clients they couldn't care less.
I guess it gets harder when we start looking at some of the services and questioning how we map the old service APIs onto the new repository. Maybe that is not difficult; perhaps it's already done? Do the questions that remain revolve around how to expose the ability to access a snapshot?
I am a little confused: is the repository different in the DM product vs. WCM? Obviously they share the same client. I was just peeking at the tables and it doesn't look entirely foreign to me. Have you already replaced NodeService with a new implementation, for example?
Unorganized questions from the forums
I know that there are a number of transformation capabilities already working with Alfresco: FreeMarker, PDFBox, ImageMagick.
One of the architectural questions I have is whether we will be putting in place a pipeline-based transformation engine (building one, or leveraging something like Cocoon).
Maybe this is unnecessary with the advancement in the workflow capabilities. I guess it all depends on what granularity you think the tasks in a workflow should be implemented at. Anyway, I'd be curious to hear what the Alfresco/community take is on this.
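A pipeline engine in the Cocoon spirit can be sketched in a few lines. This is not an existing Alfresco API; the stage functions below are trivial stand-ins, not real FreeMarker/PDFBox/ImageMagick integrations:

```python
# Hypothetical sketch: a pipeline-style transformation engine where each
# stage takes content and returns transformed content.

def make_pipeline(*stages):
    """Compose transformation stages into a single transformer."""
    def run(content):
        for stage in stages:
            content = stage(content)
        return content
    return run

# Toy stages standing in for real transformers.
strip_whitespace = lambda text: text.strip()
to_upper = lambda text: text.upper()
wrap_html = lambda text: "<p>%s</p>" % text

render = make_pipeline(strip_whitespace, to_upper, wrap_html)
print(render("  hello  "))  # -> <p>HELLO</p>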
In order to compete with other ECM vendors we will have to provide a mechanism to add components (as Kevin stated in his podcast) like breadcrumbs, navs, etc. Will there be some sort of catalog? Are we going to allow the user to drag and drop these components around and visually build a layout?
These are things I have been attempting to accomplish through the use of the portal. For the sake of the ECM product the portal may be a player, but it can't be the only answer (as many portals are frankly very heavy) -- yet you can imagine there is much overlap in the needs.
Web 2.0 allows us to achieve some of the dynamic modularity we might have used a portal for in the past. I think there are several approaches to making this work, but I am curious about what we think might be a good model.
Will we allow WCM to consume documents that are links to documents somewhere else in the repository? I notice that this does not seem to be possible in the preview.
Time Travel Mechanism
Will we be able to 'time travel' via all of the interfaces?
- WS / REST, etc.?
Before the conversation about transparent layers came up on the wiki, I was planning on enabling time travel through the use of 'targeters'. If you have no idea what a targeter is, think of it as a query that returns a set of items from a repository, with runtime-context-relevant arguments. In actuality a targeter can be something much more than a query; it can be full-out business rules.
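A minimal sketch of the targeter idea (all names here are hypothetical, not any real API): a rule that, given a runtime context, selects items from a repository. Making "now" part of the context is one way the runtime-relevant arguments could work:

```python
# Hypothetical targeter: selects content items whose publish window covers the
# context's notion of 'now'. Overriding context['now'] lets the business team
# move "what day it is" at run time.

def publish_window_targeter(items, context):
    now = context["now"]
    return [item for item in items
            if item["go_live"] <= now <= item["expires"]]

items = [
    {"name": "sale-banner",   "go_live": 10, "expires": 20},
    {"name": "holiday-promo", "go_live": 30, "expires": 40},
]
assert publish_window_targeter(items, {"now": 15}) == [items[0]]
assert publish_window_targeter(items, {"now": 35}) == [items[1]]
```

Replacing the simple window predicate with arbitrary code is what turns a targeter from a query into full-out business rules.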
If I understand correctly, to time travel based on the transparent-layers mechanism you have to attach to the repository and look through the 'lens' of the layer, which is an interface; the implementation is the actual corpus under the layer. This means you can attach with JCR, Alfresco APIs, FTP, and CIFS and see whatever the layer sees. Pretty cool.
Targeters are a very powerful mechanism for allowing your staff to put rules in place that take effect on their own at run time. I think the two concepts work well together. Prior to the layers concept, I was going to have to put a condition in the targeter for the current slice of time the rule should apply to, and then allow the business team to move the notion of what day/time it is that they are running under. This might still be useful for some things, but not for wholesale time travel. On the other hand, just as I was going to have to introduce undesirable concerns into the targeter for time travel, I think the same problem could be replicated by using transparent layers to execute all kinds of runtime business rules for the website.
Rules at the layer level are executed by the repository and are wholesale and transparent to the applications using the repository.
Rules executed by the applications using the repository obviously have granularity and isolation.
What I don't know at the moment is the implementation details of the layers. It sounds like the version of the folder the layer is looking at is one of the parameters of the layer. At first I didn't like that very much, but the more I toy with the idea in my mind, the more it seems appropriate. Versioning is a core facet of the repository, and layers are intertwined with the structure of the repository. You will want to version the parameters and existence of a layer just like any other folder.
So if I have got it, a layer is a (folder, version) pair that itself looks just like a folder but behaves with all its magical layer behavior.
That means if I point to some other type of folder, say a dynamic folder (a folder which is a virtual thing, its contents the result of a query/transform), then the layer can invoke that folder and whatever version of the query/transform pair exists at that time.
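The (folder, version) pair idea can be toy-modeled like this. This is purely my own sketch, not the actual layer implementation: the layer pins one version of the underlying folder and shows its own transparent writes on top:

```python
# Hypothetical model of a 'transparent layer' as a (folder, version) pair.
# Reading through the layer shows the pinned version of the underlying folder,
# with the layer's own local changes layered on top.

class Layer:
    def __init__(self, versions, version_id):
        self._versions = versions      # folder versions: id -> {name: content}
        self._version_id = version_id  # which version this layer looks at
        self._overrides = {}           # the layer's own transparent writes

    def write(self, name, content):
        self._overrides[name] = content

    def read(self, name):
        """Look through the 'lens': overrides first, then the pinned version."""
        if name in self._overrides:
            return self._overrides[name]
        return self._versions[self._version_id].get(name)

folder_versions = {0: {"index.html": "old"}, 1: {"index.html": "new"}}
past = Layer(folder_versions, 0)
present = Layer(folder_versions, 1)
assert past.read("index.html") == "old"       # time travel via the version pin
assert present.read("index.html") == "new"
present.write("index.html", "edited")          # layer-local change
assert present.read("index.html") == "edited"
assert past.read("index.html") == "old"        # other layers are unaffected
```

Any client that can read a folder (JCR, FTP, CIFS in the real system) would simply see whatever `read` returns, which is the "lying to the client" trick.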
It gets a little interesting when you start moving pointers around. You find that at previous points in time a layer may have pointed to some version, but was later updated to point to some other version or destination altogether. That can get pretty damn confusing, and even more semantically troubling when you throw dynamically rendering folders and recursion into the mix.
I don't have it all straight in my head yet; at the moment I am wondering a little whether I will get into a 'what is real?' mode if I use these kinds of capabilities in sophisticated ways.
It's important to figure out the syntax, the semantics, and what the user (or at least the coordinator) will need to be able to visualize what the hell is actually going on, plus some way to help people understand how the complexity they are building is evolving.
I have been doing a lot of thinking about the M2 model these days. In the past I have mentioned that hard content (physical docs) is something I think the system should be able to handle. By handle, I mean carry metadata on the object, be searchable, and be able to apply workflow and librarian functionality around it.
It seems like I may have come across another type. Remote content (hard and soft).
The more I think about it the more I am tempted to say that the content model is independent of these things (hard, digital, local, remote). Currently the only cohesion with the concept is the fact that the cm:content carries a content property.
So here is what I am thinking (blast me if I am out in space):
- Refactor cm:content (take out the content property).
- Create a couple of aspects:
  - Digital content (this repository owns and controls it)
  - Digital remote content (this repository tracks this content but doesn't carry it)
    - Perhaps a URL or whatever ID the retrieval system would need to get hold of the thing
  - Hard content (physical photo, contract, etc.):
    - Library card info
    - Contact info for access to the doc
From the user perspective, not much would change. If you do a create action or upload content, we assume you are dealing with digital content and assign the digital aspect.
If you do an 'add content', we ask what kind you are adding.
Also, this allows documents to be both digital and hard (in the case of a conversion). I haven't given that enough thought. The aspect approach seems to be a better option than creating types, because types create an explosion of the model (Cartesian product).
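A tiny sketch of the aspect idea (the aspect names and properties here are hypothetical, not part of the actual cm: model): because aspects combine freely on a node, a document can be digital and hard at once without a Cartesian explosion of types:

```python
# Hypothetical aspect model: a node carries any combination of aspects, each
# with its own properties.

class Node:
    def __init__(self, name):
        self.name = name
        self.aspects = {}  # aspect name -> properties dict

    def add_aspect(self, aspect, **props):
        self.aspects[aspect] = props

    def has_aspect(self, aspect):
        return aspect in self.aspects

# A scanned contract: both a physical artifact and a digital rendition.
contract = Node("signing-contract")
contract.add_aspect("hardContent", library_card="A-113", contact="records desk")
contract.add_aspect("digitalContent", mimetype="application/pdf")
assert contract.has_aspect("hardContent") and contract.has_aspect("digitalContent")

# Remote content: tracked here, carried elsewhere.
remote = Node("partner-whitepaper")
remote.add_aspect("digitalRemoteContent", url="http://example.com/wp.pdf")
assert not remote.has_aspect("digitalContent")
```

With types instead of aspects, "digital + hard", "digital + remote", etc. would each need their own type; that product of combinations is the explosion the aspect approach avoids.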
If this is a good idea then maybe we can vet it out here a little bit and then port the results to the wiki. If it's a bad idea then I'll just have to take my beatings as they come :)
More on Content
At my organization I have information (like film) sitting on a shelf in a stack.
Should Alfresco have some sort of native concept of hardMedia/Document?
Something I can say has metadata but no actual information (because it's not digital and can't be in the repository).
I could have basic librarian functionality around that thing:
- where is the actual thing stored
- is it currently in that location
- if not who has it checked out
- what is their intention
- when is it due back
- audit of this activity.
Also, I have hardMedia information which at some point is converted to digital format. I'd like to be able to add the new softMedia to the repository and then semantically link it to the hardMedia item.
Lastly, I have new media that is generated as a composite of clips from existing media. I ought to be able to know where that media came from,
and, in terms of the parent, whether any portions are stored as separate documents or as a composite of other document(s).
The thing is, I have tons of information. It's not all digital. I would like to go to one place (my content repository) and manage my content. The repository may not actually contain every scrap of information, but it has metadata for every scrap of information.
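To make the librarian idea concrete, here is a minimal sketch (all names hypothetical, not an Alfresco API) of a metadata-only hard-media item with a check-out ledger, an audit trail, and a link to a digitized copy:

```python
# Hypothetical 'librarian' record for non-digital media: the repository holds
# metadata and activity history, not the content itself.

class HardMediaItem:
    def __init__(self, name, location):
        self.name = name
        self.location = location    # where the physical thing is stored
        self.checked_out_to = None  # who has it, if anyone
        self.intention = None       # what they intend to do with it
        self.due_back = None        # when it is due back
        self.audit = []             # history of check-out/check-in activity
        self.derived = []           # digitized (soft) copies linked to this item

    def check_out(self, who, intention, due_back):
        self.checked_out_to, self.intention, self.due_back = who, intention, due_back
        self.audit.append(("out", who, intention, due_back))

    def check_in(self):
        self.audit.append(("in", self.checked_out_to))
        self.checked_out_to = self.intention = self.due_back = None

    def link_digital(self, soft_name):
        """Semantically link a digitized copy back to this hard item."""
        self.derived.append(soft_name)

film = HardMediaItem("reel-42", "shelf B3")
film.check_out("alice", "digitization", "2007-01-15")
film.check_in()
film.link_digital("reel-42.mp4")
assert film.checked_out_to is None
assert len(film.audit) == 2          # full audit of the activity survives
assert film.derived == ["reel-42.mp4"]
```

Composite provenance for new media cut from existing clips could reuse the same `derived`-style links pointing the other way, from the composite back to its sources.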