README for task-based orchestration

This README describes extensions to chunks & rules to support a swarm of collaborating cognitive agents, each of which can control multiple devices via their digital twins.

Chunks & Rules can be used for cognitive agents with event-driven concurrent threads of behaviour. Applications can use the chunk library API to mimic human perception with code that senses the environment, listens for messages from other agents, maintains chunk graphs as live models of the environment, and queues chunks to module buffers as events that trigger the corresponding behaviours. Agents can message each other by name with @message or by topic with @topic, where agents subscribe to the topics of interest using @subscribe. Each message is a chunk. The underlying protocol needs to support reliable, timely, in-sequence message delivery, e.g. Zenoh, MQTT, DDS, WebRTC or WebSockets.
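The messaging model above can be sketched as a minimal in-process broker. This is an illustration only: the class and method names below are hypothetical and are not the chunk library's API; messages are modelled as plain dicts of chunk properties.

```python
# Illustrative sketch of the messaging model: direct delivery by agent name
# (@message), publish to a topic (@topic), and topic subscription (@subscribe).
# Not the chunk library API - the names here are hypothetical.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.agents = {}                # agent name -> inbox (list of chunks)
        self.topics = defaultdict(set)  # topic -> set of subscribed agent names

    def register(self, name):
        self.agents[name] = []

    def subscribe(self, name, topic):   # @subscribe: express interest in a topic
        self.topics[topic].add(name)

    def message(self, target, chunk):   # @message: deliver to one named agent
        self.agents[target].append(chunk)

    def publish(self, topic, chunk):    # @topic: deliver to all subscribers
        for name in self.topics[topic]:
            self.agents[name].append(chunk)

# Each message is a chunk, modelled here as a dict of properties.
broker = Broker()
for name in ("robot1", "robot2", "logger"):
    broker.register(name)
broker.subscribe("robot2", "alerts")
broker.subscribe("logger", "alerts")

broker.message("robot1", {"@from": "logger", "type": "ping"})
broker.publish("alerts", {"@from": "robot1", "type": "alert", "level": "high"})

print(len(broker.agents["robot1"]))   # received the direct message
print(len(broker.agents["robot2"]))   # received the topic message
```

A real deployment would replace the in-process dictionaries with one of the transports listed above (e.g. MQTT or Zenoh) to get reliable, in-sequence delivery across machines.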

Actions can either operate on module chunk graphs or actuate devices. Applications can register custom actions to mimic the brain's cortico-cerebellar circuit, where real-time control is dynamically adapted using perception of sensory data, analogous to how you reach for a coffee cup, fine-tuning the motion of your hand out of the corner of your eye as it nears the cup. See the bottling demo for an example of a cognitive agent that implements real-time control over conveyor belts, a robot arm and other manufacturing machines.
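The perceive-and-adapt loop above amounts to closed-loop control: each cycle senses the remaining error and corrects the action accordingly. A hedged one-dimensional sketch, where the proportional gain and the numbers are illustrative only:

```python
# Hedged sketch of the perceive-act loop: the action is continually corrected
# using fresh sensory data, analogous to guiding a hand toward a cup.
# The gain and positions are illustrative, not from the bottling demo.

def control_step(position, target, gain=0.3):
    """One cycle: sense the remaining error, then nudge the actuator."""
    error = target - position          # perception of sensory data
    return position + gain * error     # dynamically adapted action

position, target = 0.0, 10.0
for _ in range(30):                    # repeated perceive-act cycles
    position = control_step(position, target)

print(abs(target - position) < 0.01)   # converges close to the target
```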

Tasks are an abstraction for named threads of behaviour. Rules can initiate tasks with @do task, and signal success or failure with @do done and @do fail respectively, akin to resolve and reject for JavaScript promises. You can use @on to delegate a task to a named agent. @all can be used to signal when all of the associated tasks have completed successfully, @any when any of the tasks has succeeded, and @failed when any of the tasks has failed. The demo task uses a custom operation @do timer that waits for a random time in seconds within the range set by the min and max properties. A timer can be used to recover when tasks take too long to complete.
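The task combinators above map naturally onto familiar async primitives. The following sketch uses Python's asyncio for illustration: a task's normal return plays the role of @do done, a raised exception that of @do fail, and the helper names (all_of, any_of, with_timer) are hypothetical, not part of the chunks & rules library.

```python
# Illustrative mapping of task combinators onto asyncio:
# @all ~ wait for every subtask, @any ~ first completion wins,
# @do timer ~ a timeout used to recover from slow tasks.
import asyncio

async def task(name, delay, ok=True):
    await asyncio.sleep(delay)
    if not ok:
        raise RuntimeError(name)       # signals @do fail
    return name                        # signals @do done

async def all_of(*tasks):              # @all: succeed only if every task does
    return await asyncio.gather(*tasks)

async def any_of(*tasks):              # @any: first completed task's result
    done, pending = await asyncio.wait(
        [asyncio.ensure_future(t) for t in tasks],
        return_when=asyncio.FIRST_COMPLETED)
    for p in pending:
        p.cancel()
    return done.pop().result()

async def with_timer(t, limit):        # timer: recover when a task is too slow
    return await asyncio.wait_for(t, timeout=limit)

async def main():
    print(await all_of(task("a", 0.01), task("b", 0.02)))
    print(await any_of(task("fast", 0.01), task("slow", 0.5)))
    try:
        await with_timer(task("late", 0.5), limit=0.05)
    except asyncio.TimeoutError:
        print("timed out")

asyncio.run(main())
```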

Swarms can be dynamic, with agents entering and leaving at any time. A simple approach to decentralised naming is for agents to name themselves with a large random integer. Agents can signal entering and leaving by publishing a message on an associated topic. For this purpose, it makes sense to include @from for the sender's name as part of the message chunk, where other properties can be used to describe the sender's capabilities. Messages can also be used to support consensus building, auctions, negotiations and distributed storage, as well as assigning agents to given roles.
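The naming and presence scheme above can be sketched in a few lines. This is a hedged illustration: the message shapes are modelled as dicts of chunk properties, and the exact property names beyond @from are assumptions.

```python
# Hedged sketch of decentralised naming: each agent names itself with a large
# random integer and announces entering/leaving on a presence topic, with an
# @from property plus properties describing its capabilities.
# The "event"/"capabilities" property names are illustrative assumptions.
import secrets

def make_name():
    return secrets.randbits(128)   # 128 random bits: collisions are vanishingly unlikely

def enter_message(name, capabilities):
    # each message is a chunk, modelled here as a dict of properties
    return {"@from": name, "event": "enter", "capabilities": capabilities}

def leave_message(name):
    return {"@from": name, "event": "leave"}

a, b = make_name(), make_name()
print(a != b)                                   # names are effectively unique
msg = enter_message(a, ["gripper", "camera"])
print(msg["event"], "@from" in msg)
```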

Future work will look at machine learning, e.g. task-based reinforcement learning across multiple agents. By keeping track of which rules were used in a given task, the agent can update each rule's strength based upon its utility in attaining goals. The stronger a rule, the more likely it is to be selected for execution. Ineffective rules will be forgotten. Neural networks seem like a good choice for modelling domain knowledge as a basis for guiding learning, including the process of learning to learn, so that agents can learn from just a few examples. A further question is whether neural networks are a better basis for implementing rules than the symbolic approach of chunks & rules. A related question is how to implement fuzzy rules inspired by fuzzy logic.
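The strength-update scheme described above can be sketched as a simple exponential moving average toward observed utility, with weak rules dropped. The learning rate and forgetting threshold below are assumptions for illustration, not values from the text.

```python
# Illustrative sketch of rule-strength learning: rules used in a task move
# toward the observed utility, and very weak rules are forgotten.
# The learning rate (lr) and forget_below threshold are assumed values.

def update_strengths(strengths, used_rules, utility, lr=0.2, forget_below=0.05):
    for rule in used_rules:
        s = strengths[rule]
        strengths[rule] = s + lr * (utility - s)   # nudge toward utility
    # ineffective rules are forgotten
    return {r: s for r, s in strengths.items() if s >= forget_below}

strengths = {"r1": 0.5, "r2": 0.5, "r3": 0.06}
strengths = update_strengths(strengths, ["r1"], utility=1.0)  # r1 helped
strengths = update_strengths(strengths, ["r3"], utility=0.0)  # r3 did not
print(round(strengths["r1"], 2))   # strengthened
print("r3" in strengths)           # forgotten
```

At conflict resolution, selection probability would then be weighted by strength, so effective rules fire more often over time.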