A Chained Task
Assumptions Ahead
These examples assume you are using the permissive whitelist. If you restrict the whitelist, these examples will need to be modified.
Minimum Version Required
These examples assume you are running v14.1.1 or later. Older versions of Computes will not run these examples correctly.
Next, let's run a more complicated task. This task will show you how to have chained tasks, or tasks that create other tasks. Make sure you've looked at Your First Task and Getting Started with Computes, and that you have the IPFS and Computes daemons running on your machine.
Like Your First Task, this task will take an input value and split it into an array of numbers. One difference you'll see here is that the taskDefinition is a link instead of containing the JSON data itself. The benefit of this will become clear in just a moment.
1. Copy These Files
Save the following three files: map-task-definition.json, split-task-definition.json, and split-task.json.
map-task-definition.json:
{
"runner": {
"type": "docker-json-runner",
"manifest": {
"*": {
"image": "computes/fibonacci-sum-map:latest"
}
}
},
"result": {
"action": "append",
"destination": {
"dataset": {
"init": "chained-task-example-results"
},
"path": "map/results"
}
}
}
split-task-definition.json:
{
"runner": {
"type": "docker-json-runner",
"manifest": {
"*": {
"image": "computes/fibonacci-sum-split:latest"
}
}
},
"result": {
"action": "set",
"destination": {
"dataset": {
"init": "chained-task-example-results"
},
"path": "split/results"
}
},
"conditions": [
{
"name": "Create Map tasks",
"condition":
"exist(dataset(hpcp(join(initref('chained-task-example-results'), 'split/results')))) && !exist(dataset(hpcp(join(initref('chained-task-example-results'), 'map/results'))))",
"taskDefinition": {
"/": "<map-task-definition-hash-here>"
},
"action": "map",
"source": {
"dataset": {
"init": "chained-task-example-results"
},
"path": "split/results"
}
}
]
}
split-task.json:
{
"input": {
"dataset": 2
},
"taskDefinition": {
"/": "<split-task-definition-hash-here>"
},
"status": {
"init": "chained-task-example-status"
}
}
The split-task-definition.json has a new addition: conditions. Conditions are user-definable rules that allow you to conditionally create a new task. In this case, we look up the latest version of the result dataset and check whether there is a value at split/results and no value at map/results. If both conditions are true, new tasks are created from the <map-task-definition-hash-here> task definition. Here we use the map action in order to create as many tasks as there are results under the split/results key. In our case, there should be two.
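Conceptually, the map action fans out one new task per element of the source array. As a rough shell analogy (the `[1, 2]` array stands in for the split/results dataset; this is an illustration, not how Computes actually schedules tasks):

```shell
# Hypothetical stand-in for the array stored at split/results
results='[1, 2]'

# The map action creates one task per array element, roughly like:
echo "$results" | tr -d '[] ' | tr ',' '\n' | while read -r element; do
  echo "would create one map task with input: $element"
done
```

With two elements in split/results, two map tasks are created, each receiving one element as its input.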
Before we are able to run this task, we'll need to fill in some of those values.
2. Add map-task-definition.json to IPFS
Before the split-task-definition can be added, the linked map-task-definition must be added first, so that we know the hash to link to. This may feel backwards, but it is due to the nature of IPFS: all files are immutable, meaning they can't be changed after they have been added, and they are content addressed. Content addressed means that as the contents change, the "name" of the file stored in IPFS changes as well. If we want to point to the correct map-task-definition.json in IPFS, we need to know the exact hash that IPFS has assigned to it.
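Content addressing can be illustrated with any cryptographic hash: change one byte of the content and the "name" changes completely. (IPFS uses multihash CIDs rather than raw sha256sum output, but the principle is the same.)

```shell
# Two files that differ by a single character get unrelated digests.
printf 'hello\n'  > a.txt
printf 'hello!\n' > b.txt
sha256sum a.txt b.txt
```

This is why the map-task-definition hash must exist before split-task-definition.json can reference it: the reference is the hash of the exact bytes.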
cat map-task-definition.json | ipfs dag put > map-task-definition.hash
3. Edit split-task-definition.json
Replace <map-task-definition-hash-here> with the value found in map-task-definition.hash.
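You can make the substitution in any editor, or with sed (GNU sed shown; the in-place flag behaves differently on BSD/macOS sed):

```shell
# Substitute the placeholder with the hash captured in step 2.
# | is used as the sed delimiter since hashes never contain it.
sed -i "s|<map-task-definition-hash-here>|$(cat map-task-definition.hash)|" \
  split-task-definition.json
```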
4. Add split-task-definition.json to IPFS
Before the task can be added, the linked task-definition must be added first, so we know the hash to link to.
cat split-task-definition.json | ipfs dag put > split-task-definition.hash
5. Edit split-task.json
Replace <split-task-definition-hash-here> with the value from split-task-definition.hash.
6. Add split-task.json to IPFS
Add the task to IPFS so we can enqueue it.
cat split-task.json | ipfs dag put > split-task.hash
7. Enqueue split-task.json
Enqueue the task so it can be run by Computes.
cat split-task.hash | computes-cli task enqueue
8. Check the results
You may periodically run this command to see if the results are in. In this case, we should see data under both split/results and map/results.
cat split-task.hash | computes-cli task dataset
Expected Results:
{
"map": {
"results": [
1,
1
]
},
"split": {
"results": [
1,
2
]
}
}
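These numbers line up with the Fibonacci sequence: the split step turns the input 2 into [1, 2], and each map task appears to compute the Fibonacci number of its element (fib(1) = 1, fib(2) = 1), yielding [1, 1]. The following awk sanity check assumes that interpretation of the computes/fibonacci-sum-map image, based on its name and these results:

```shell
# Iterative fib(n); fib(1) = 1, fib(2) = 1
for n in 1 2; do
  awk -v n="$n" 'BEGIN { a = 0; b = 1; for (i = 1; i < n; i++) { t = a + b; a = b; b = t } print b }'
done
# prints 1 twice, matching map/results above
```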