Tuesday, April 16, 2024

Tech breakdown of VISS.AI

Introduction

VISS.AI is built on top of a multi-agent system that enhances task management through the strategic distribution of responsibilities among autonomous agents using advanced Large Language Models (LLMs) and integrated APIs. The core of VISS.AI is its ability to efficiently allocate and execute tasks using 'callables'—functions or methods designed to interact with specific API endpoints, developed through automated web scraping of API documentation.

Each agent in the system is capable of decision-making and executing tasks autonomously, which includes generating subtasks when dealing with complex operations. This not only distributes the workload effectively but also optimizes the problem-solving process across multiple agents.

The system's architecture allows for the customization of LLMs and operational parameters at various levels (system-wide, agent-specific, task-specific), enabling precise tuning of performance based on the nature of the tasks. Security measures within VISS.AI include symmetric non-deterministic encryption and a deterministic encryption index managed through the Google Cloud Key Management System, ensuring both data protection and query efficiency.


Table of Contents

1. Solving Tasks

2. Agents

3. Combination of Large Language Models

4. Callables and Automatic Integrations

5. Security


  1. Solving Tasks

In the multi-agent system of VISS.AI, solving a task is a sophisticated and detailed process that involves multiple components working in concert. The system is structured to manage tasks hierarchically, where each task can generate subtasks assigned to various agents. This methodical breakdown helps in handling complex problems by dividing them into more manageable tasks, each handled by specialized agents.

Task Initiation and Management

The process begins when a user submits a task to the system, which places it in a task queue. The task manager allocates tasks to specific agents: autonomous, LLM-powered software entities equipped with decision-making capabilities.
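A minimal sketch of this queue-and-dispatch structure is shown below. The `Task`, `TaskManager`, and round-robin allocation are illustrative assumptions, not the actual VISS.AI implementation, which matches tasks to agents by specialization:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Task:
    description: str
    state: dict = field(default_factory=dict)

class TaskManager:
    """Holds a queue of submitted tasks and hands them to registered agents."""
    def __init__(self):
        self.queue = Queue()
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def submit(self, task: Task):
        self.queue.put(task)

    def dispatch_all(self):
        results = []
        while not self.queue.empty():
            task = self.queue.get()
            # Naive round-robin allocation stands in for the real system's
            # specialization-aware matching of tasks to agents.
            agent = self.agents[len(results) % len(self.agents)]
            results.append(agent.handle(task))
        return results
```

Any object with a `handle(task)` method can act as an agent here, which keeps the manager decoupled from how individual agents process their work.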

Task Processing by Agents

Upon receiving a task, an agent starts with an action selection process. This critical step involves evaluating the current state of the task alongside possible actions to determine the most suitable action to take.

If the action involves executing a function or method (termed a "callable"), the agent proceeds to extract the necessary parameters for that callable.

Execution and Subtask Delegation

Following parameter extraction, the agent executes the callable to produce a result, which is then used to update the task's state. This iterative process continues until the task is resolved. However, if a task's complexity exceeds the processing capacity of a single agent, it may spawn one or more subtasks. These subtasks are delegated to other agents, allowing the system to distribute workload effectively and enhance problem-solving efficiency.
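The select-execute-update loop described above, including subtask delegation, can be sketched as follows. The `Action` kinds, the `select_action` and `extract_parameters` hooks (LLM-backed in VISS.AI), and the recursion into subtasks are all simplifying assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    state: dict = field(default_factory=dict)

@dataclass
class Action:
    kind: str            # "call", "delegate", or "done"
    name: str = ""
    fn: callable = None  # the callable to execute
    subtask: "Task" = None
    delegatee: object = None

def solve(task, agent, max_steps=10):
    """Iteratively select an action, execute it, and fold the result
    into the task state until the agent signals completion."""
    for _ in range(max_steps):
        action = agent.select_action(task)               # LLM-backed in VISS.AI
        if action.kind == "done":
            break
        if action.kind == "delegate":
            # Too complex for one agent: solve a subtask elsewhere and
            # merge its outcome into the parent task's state.
            task.state.update(solve(action.subtask, action.delegatee))
            continue
        params = agent.extract_parameters(action, task)  # also LLM-backed
        task.state[action.name] = action.fn(**params)
    return task.state
```

The `max_steps` bound is a common safeguard against a model that never emits a terminating action.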

Each subtask is independently managed, yet their outcomes contribute collectively to the state of the main task. This modular approach ensures thoroughness and accuracy in task resolution.

Progress Tracking and Updates

Throughout the task's lifecycle, the system meticulously records every action performed, the results of all callables, and any state modifications. This comprehensive record-keeping plays a pivotal role in maintaining transparency and enables the system to provide consistent updates to the user regarding the task's progress.


  2. Agents

In the VISS.AI system, agents are pivotal in the execution and management of tasks. An agent within this context refers to a software entity that is autonomously intelligent, capable of performing specific actions and making decisions based on the information at hand. These agents are integral to the system's operation, with capabilities to interact with other agents, exchange information, and delegate tasks.

Role and Functionality of Agents

Each agent is tasked with specific assignments where they first engage in an action selection process. This process involves a critical evaluation of the task’s current state against the possible actions to determine the most appropriate course of action. An agent may use several techniques to determine the next action to take, such as creating a plan for multiple tasks ahead of time, evaluating previously taken actions, analyzing context provided by the user, and considering historical action selections.
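One way to picture this selection step is as ranking candidate actions by a scoring function. In VISS.AI the evaluation is performed by an LLM prompted with the plan, the task state, prior actions, and user context; the `score_fn` below is a stand-in for that model call and is purely illustrative:

```python
def select_action(candidates, task_state, history, score_fn):
    """Return the highest-scoring candidate action.  score_fn stands in
    for the LLM's judgment of each action given the task state and the
    history of actions already taken."""
    best, best_score = None, float("-inf")
    for action in candidates:
        s = score_fn(action, task_state, history)
        if s > best_score:
            best, best_score = action, s
    return best
```

For instance, a scoring function that penalizes repeating an action already in the history would steer the agent toward untried actions.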

Task Execution and Management

When an action necessitates the invocation of a function or method, in VISS.AI referred to as a "callable," the agent proceeds to parameter extraction. This is another area where language models play a crucial role, as they are instructed to deduce and extract necessary parameters from the provided context efficiently. Once parameters are determined, the agent executes the callable, which in turn generates a result used to update the task’s state. This cycle repeats until the task reaches completion.

Handling Complex Tasks Through Subtasks

Complex tasks that exceed the processing capabilities or available actions of a single agent are managed through the creation of subtasks. These subtasks are delegated to other agents, thus employing a distributed approach to task management. Each subtask is processed independently, yet the results are consolidated to update the parent task's state, ensuring cohesive progress and systematic completion.

Interaction with Users

Agents are also designed to facilitate direct interaction with users. They receive tasks from a user or other agents, process these tasks accordingly, and return the outcomes back to the user. This interaction is coordinated by a task manager, which not only manages the task queue but also ensures tasks are appropriately distributed among agents based on their specialization.

  3. Combination of Large Language Models

VISS.AI incorporates a strategic selection of Large Language Models (LLMs) and providers, which play a pivotal role in defining the system's operational effectiveness and adaptability. This flexibility in selection ensures that the system can be tailored to meet specific use cases, enhancing performance through a hierarchical structure that allows for precise customization at various operational levels.

System-Wide Default Settings

At the highest operational level, the system permits the specification of default LLMs and providers. These defaults are broadly applied across the system but can be overridden by more specific configurations at lower levels. The choice of default LLM and provider typically reflects the general requirements of the system and the nature of tasks it is designed to handle. This foundational setting ensures a baseline of efficiency and compatibility with the anticipated diversity of tasks.

Agent-Specific Customization

Moving down the hierarchy, at the agent level, individual LLMs and providers can be assigned to specific agents. This level of customization facilitates the tailoring of agent capabilities to their designated roles and responsibilities within the system. For example, an agent tasked with handling particularly complex problems might be equipped with a more sophisticated LLM and provider than its counterparts. This ensures that each agent operates with the most effective tools for its specific tasks, greatly enhancing the entire system's performance.

Task-Specific LLM and Provider Selection

The flexibility extends further to the level of individual tasks within an agent’s available actions. Here, callables—functions or methods that an agent invokes to perform tasks—are associated with specific LLMs and providers. This association allows for overrides of the broader agent-level and system-level selections, providing a highly specialized approach to task execution.

Customizable Operational Instructions

In addition to the LLM and provider selections, the system also offers customizable instructions for key operations such as action selection, response generation, and parameter extraction. These instructions can be tailored at the system, agent, or callable levels, offering extensive control over the system’s behavior and further enhancing its responsiveness and accuracy in task execution. By default, the system selects more thorough and longer instructions for more complex tasks, and shorter, more direct instructions for less complex tasks.
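The three-level override scheme (callable over agent over system) amounts to a precedence lookup. A minimal sketch, with hypothetical model and provider names:

```python
def resolve_setting(key, callable_cfg, agent_cfg, system_cfg):
    """Resolve a configuration value (LLM, provider, or instruction set)
    by precedence: callable-level overrides agent-level, which overrides
    the system-wide default."""
    for cfg in (callable_cfg, agent_cfg, system_cfg):
        if key in cfg:
            return cfg[key]
    raise KeyError(f"no default configured for {key!r}")
```

The same lookup can serve the customizable instructions: store an `"action_selection_instructions"` entry at whichever level should override the others.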


  4. Callables and Automatic Integrations

Managing APIs and creating callables are critical processes that ensure seamless integration of various services and functionalities. In VISS.AI, integrations are facilitated by an automated system that leverages extensive API documentation available online, using a sophisticated method of web scraping to gather and analyze necessary data.

Gathering API Documentation

The initial phase in managing APIs involves the collection of API documentation through web scraping. This technique involves programmatically visiting web pages to extract useful information contained within them.

Analyzing Documentation

Once the API documentation is collected, the next step is to thoroughly analyze this documentation to extract essential details such as API endpoints, accepted parameters, and the data they return. This analysis is crucial as it helps in understanding the functionality and limitations of the APIs involved, which is imperative for the accurate creation and testing of callables.
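As a toy illustration of this analysis step, the sketch below pulls an HTTP method, path, and parameter list out of a documentation fragment. The expected format (`GET /v1/users/{id}` followed by a bulleted parameter list) is an assumption for the example; real scraped pages vary widely and need far more robust parsing:

```python
import re

def parse_endpoint_doc(text):
    """Extract the HTTP method, path, and named parameters from a
    scraped documentation fragment."""
    m = re.search(r"\b(GET|POST|PUT|PATCH|DELETE)\s+(/\S+)", text)
    if not m:
        return None
    # Bulleted lines ("- name" or "* name") are treated as parameters.
    params = re.findall(r"^\s*[-*]\s*(\w+)", text, flags=re.MULTILINE)
    return {"method": m.group(1), "path": m.group(2), "params": params}
```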

Creation of Callables

The next step in the process involves the creation of callables. In the context of VISS.AI, a callable is defined as a function or method that an agent can invoke to perform a specific task. Each callable is intricately associated with a specific API endpoint and is designed to include all necessary parameters required for that endpoint. The creation of these callables is based directly on the information extracted from the API documentation.
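A callable built from such extracted details might look like the following sketch. To stay runnable offline it only assembles the request rather than sending it; a real callable would also perform the HTTP call (e.g. with urllib.request) and handle authentication and errors. The base URL and endpoint are hypothetical:

```python
from urllib.parse import urlencode

def make_callable(base_url, method, path, param_names):
    """Build a callable for one API endpoint from its extracted spec.
    The returned function validates its keyword arguments against the
    documented parameter names and assembles the request URL."""
    def call(**params):
        unknown = set(params) - set(param_names)
        if unknown:
            raise ValueError(f"unexpected parameters: {unknown}")
        url = base_url + path
        if params:
            url += "?" + urlencode(params)
        return method, url
    call.__name__ = f"{method}_{path.strip('/').replace('/', '_')}"
    return call
```

Generating one such function per documented endpoint gives agents a uniform catalog of actions to choose from.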

Testing of Callables

The system can automatically validate callables by generating test inputs for different use cases and attempting to run them; however, every callable is validated by a human before it is made available to users. During testing, the system may modify a callable directly as it discovers inconsistencies in the version generated from the API documentation.


  5. Security

Non-deterministic Encryption

We utilize symmetric non-deterministic encryption to secure sensitive data. This encryption methodology ensures that each piece of data is encrypted with a unique ciphertext, even when the same plaintext is encrypted multiple times. Such a property significantly enhances the security of the encrypted data by obfuscating patterns and thwarting statistical analysis attempts by potential attackers.
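The non-deterministic property can be demonstrated with a toy stream cipher that derives its keystream from the key and a fresh random nonce. This is an illustration of the concept only, not the scheme VISS.AI uses and not production-grade cryptography; a vetted AEAD cipher such as AES-GCM should be used in practice:

```python
import hashlib, hmac, os

def _keystream(key, nonce, length):
    """Pseudo-random keystream: HMAC-SHA256 over (nonce || counter)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)  # fresh randomness => a unique ciphertext every call
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, ciphertext):
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))
```

Encrypting the same plaintext twice yields two different ciphertexts, yet both decrypt to the original, which is exactly the pattern-hiding property described above.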

Cloud key encryption

To securely store encryption keys, we employ the Google Cloud Key Management System (KMS). Google Cloud KMS offers advanced security features, including hardware-based key protection, stringent access controls, and comprehensive audit logging. These features ensure that encryption keys are well-protected and accessible only to duly authorized entities, thereby fortifying our data security infrastructure.

Efficient Querying of Data

To facilitate efficient querying within our encrypted database, we have implemented a deterministic encryption index. This index obviates the need for an initialization vector which is typically required in non-deterministic encryption schemes. Deterministic encryption, in contrast to its non-deterministic counterpart, generates identical ciphertext from the same plaintext input. This consistency allows for equality comparisons on encrypted data without necessitating decryption, thereby enhancing query performance.

SHA256 and Deterministic Encryption

The construction of the deterministic encryption index leverages the SHA256 hashing algorithm. SHA256 is a cryptographic hash function that delivers a 256-bit fixed-size output regardless of input size. It exhibits collision resistance, making it computationally infeasible to find two distinct inputs that yield the same output hash. Applying SHA256 to plaintext data before encryption provides a secure and deterministic representation of the data for indexing purposes.
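A minimal sketch of such an index, assuming rows that store a ciphertext column alongside a SHA256 index column (the row layout and column naming are illustrative):

```python
import hashlib

def index_token(value: str) -> str:
    """Deterministic index value: the same plaintext always hashes to
    the same token, so equality lookups work without decrypting."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def find_rows(rows, column, plaintext):
    """Equality query over 'encrypted' rows via the deterministic index."""
    token = index_token(plaintext)
    return [r for r in rows if r[column + "_idx"] == token]
```

Because the query compares only hash tokens, the database never needs the decryption key to answer an equality lookup.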

While deterministic encryption facilitates efficient data querying, it also potentially exposes data characteristics, particularly when the plaintext domain is limited or if the data distribution is previously known to an attacker. Consequently, the adoption of deterministic encryption necessitates careful consideration regarding the use case and the sensitivity of the data to balance query efficiency against potential security vulnerabilities.

© 2024 VISSAI AB. All rights reserved.