The corresponding challenge is #49.
A maintainer of a Linked Data Notifications (LDN) inbox wants automated processes to run when particular LDN notifications appear in one of her inboxes or are sent by her. Possible automated processes include:
- Auto reply to some messages
- Forward the notification to the LDN inbox of another autonomous agent
- Append outgoing messages to an outbox/sent container
- Generate a new notification to be sent to a particular LDN inbox
What should happen after receiving an incoming or outgoing message is described by N3 rules. These rules may require read access to the pod (for example, to check whether a resource exists on the pod). An example of a rule in natural language is:
IF inbox I contains a notification about a resource A,
AND resource A exists on pod P,
THEN generate new notification and send it to inbox X.
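A hedged sketch of how such a rule could look in N3 (all `ex:` terms below are illustrative placeholders, not Koreografeye's actual vocabulary):

```n3
@prefix ex: <http://example.org/> .
@prefix as: <https://www.w3.org/ns/activitystreams#> .

# Illustrative only: the ex: terms are placeholders, not a real vocabulary.
{
  ?notification a as:Announce ;          # a notification in inbox I ...
      as:object ?resource .              # ... about resource A
  ?resource ex:existsOnPod ex:podP .     # resource A exists on pod P
}
=>
{
  # Request a new notification to be sent to inbox X; the actual
  # sending is a side effect left to the policy executor.
  ex:policy1 ex:sendNotificationTo ex:inboxX ;
      ex:about ?resource .
} .
```

Note that the conclusion only *describes* the desired side effect; executing it is left to a later phase.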
The automated processes and rules are executed by the orchestrator, which is an autonomous agent, on behalf of the maintainer.
We developed Koreografeye, which allows users to run automated processes against Solid pods. Koreografeye provides two commands: `orch` for the orchestrator and `pol` for the policy executor. The `orch` command takes the input data, uses the N3 rules to decide what to do with the data, and puts the results in an output folder. The `pol` command takes the output of the `orch` command and executes the requested policies defined by the N3 rules.
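The two-step data flow can be sketched as follows. This is a minimal conceptual model, not the actual Koreografeye API: the `Policy` shape, function names, and plugin registry are assumptions made for illustration.

```typescript
// A policy is a declarative description of a requested side effect,
// injected by "orch" and executed later by "pol".
interface Policy {
  plugin: string;   // e.g. an identifier for a notification-sending plugin
  target: string;   // e.g. an LDN inbox URL
}

// "orch" (sketch): apply a rule to each input message; the result is
// only data (policy descriptions), never an executed side effect.
function orch(messages: string[], rule: (msg: string) => Policy | null): Policy[] {
  return messages
    .map(rule)
    .filter((p): p is Policy => p !== null);
}

// "pol" (sketch): look up the plugin for each injected policy and run it.
function pol(policies: Policy[], plugins: Map<string, (p: Policy) => string>): string[] {
  return policies.map(p => {
    const run = plugins.get(p.plugin);
    if (!run) throw new Error(`unknown plugin: ${p.plugin}`);
    return run(p);
  });
}

// Usage: one rule that turns every message into a "send" policy.
const policies = orch(["hello"], _msg => ({ plugin: "send", target: "https://example.org/inbox" }));
const results = pol(policies, new Map([["send", (p: Policy) => `sent to ${p.target}`]]));
console.log(results); // ["sent to https://example.org/inbox"]
```

Splitting the pipeline this way is what makes the command-line composition below possible: any tool that can write policy descriptions can feed the executor.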
We made the following important technological decisions and assumptions:
- We expect the solution to be part of a larger existing framework. Koreografeye deliberately does not implement features such as scheduling, rate limits, inputs and outputs, priorities, and so on. Existing tools already provide these features. In other words, Koreografeye should be able to act as just one added component in, for example, an Apache NiFi installation.
- The composition of Koreografeye parts such as input, orchestration, and policy execution is possible via the command line. That way it is programming language-independent. The JavaScript API is just an added feature, not the core of the design.
- Developers can create many plugins for policy enforcement and reasoner implementations. Components.js was used to facilitate this.
- Koreografeye makes a strict separation between reasoning and policy execution. We advise against creating any side effects in the reasoning part, for two reasons:
  - The rule book that the reasoner gets as input can come from external sources. We don't want arbitrary execution of code.
  - During the reasoning phase there is little control over possible side effects: when they are executed, in which order, and how often.
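The separation argument can be made concrete with a small sketch. The names below are illustrative, not Koreografeye's API: the point is that the reasoner is pure and only emits policy records, while the executor alone decides when, and how often, effects run.

```typescript
// A policy record: pure data produced by reasoning.
interface PolicyRecord { id: string; action: string; }

// Pure reasoning phase: data in, policy records out, no I/O.
function reason(facts: string[]): PolicyRecord[] {
  return facts.map((fact, i) => ({ id: `p${i}`, action: `forward:${fact}` }));
}

// Execution phase: the only place side effects happen. Deduplicating by
// policy id means re-running the reasoner can never repeat an effect.
function execute(records: PolicyRecord[], done: Set<string>, sent: string[]): void {
  for (const r of records) {
    if (done.has(r.id)) continue;  // the executor controls "how often"
    sent.push(r.action);           // stand-in for a real side effect
    done.add(r.id);
  }
}

// Reasoning runs twice and produces the same records both times,
// but each effect still executes exactly once.
const done = new Set<string>();
const sent: string[] = [];
execute(reason(["noteA"]), done, sent);
execute(reason(["noteA"]), done, sent);
console.log(sent); // ["forward:noteA"]
```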
- Retrieving input from a (Solid) source is a solved issue. Koreografeye starts when an RDF resource is copied into a local storage location accessible to Koreografeye.
- Retrieving the rule book from an external source is a solved issue. Koreografeye expects a local storage location with zero or more rule book N3 or N3S (RDF Surfaces) files.
- The order in which policy enforcements are executed is not relevant.
- Rule books don't interfere with each other. Each rule book gets a fresh copy of the RDF input data.
As a user of Koreografeye with Node.js installed, you can try the demo as follows:
1. Clone the demo repo via `git clone https://github.com/eyereasoner/KoreografeyeDemo.git`.
2. Install the dependencies via `npm i`.
3. Run the `orch` command to process all RDF resources in the `in` directory using the `ldn.n3` rules file via `npm run orch:ldn`. This generates the file `out/demo.ttl` as output, containing the input RDF resource plus injected policies.
4. Run the `pol` command against the output of step 3 via `npm run pol`. This will send a notification to https://httpbin.org/post using the SendNotificationPlugin. If https://httpbin.org/post pointed to a real Solid pod inbox, the notification would be available there.
- Integrate the Event Notifications TypeScript library or Bashlib into Koreografeye to provide out-of-the-box importing of notifications. This is currently an external dependency.
- The current implementation is single-threaded. Add options to specify how many threads can be used to execute the policies.
- The current system makes no assumptions about how policies should be executed, in which order, and what to do when a policy fails. Align with other groups to find a solution for how a composition of policies should be executed.
For real Solid integration, processes need to be automated. Resources also need to be made available, updated, and deleted when no user is present and no web browser is open. In a decentralized world we can't expect Solid pods to keep adding logic to provide these automation steps. External services will provide this automation capacity.
The good part of having logic external to your application is that programmers only need to worry about their own problem domain. For example, I want LDN notifications that are sent to my pod to be forwarded to my email address. This logic shouldn't have to be implemented by every Solid application that sends notifications to my pod. It is of no concern to these applications what kind of extra business rules apply when a notification arrives in the LDN inbox.
The bad part is like everything else in a decentralized world:
- Who keeps an overview of what process flows are running on your data?
- Who keeps an overview of resource dependencies?
For example, can I delete/update a resource when I don't know if another process still needs it?
It is possible to provide very limited remote execution near or inside a Solid pod when logic and side-effect handling can be separated. An orchestrator could in principle also run inside a pod. But most probably one wants to give it very limited capabilities to prevent users from running arbitrary code on the pod server itself. For example, an orchestrator that works only as an LDN inbox rule engine, like the ones in Outlook or Thunderbird, would allow only moving incoming notifications to a specific container.
Although Bashlib gives us a command line tool to access a Solid pod and manage its content, remote server-side Solid data management is in general not a solved issue. Bashlib requires assumptions about specialised authentication methods for each Solid implementation. We currently depend on an authentication mechanism with tokens that only works for the Community Solid Server. These tokens allow authenticated requests without requiring the user to log in periodically. Other pod servers also provide such tokens, but each has its own mechanism for doing so.