A multi-agent network involves multiple autonomous agents working together to achieve a common goal. Each agent in the network is designed to perform specific tasks, communicate with other agents, and make decisions based on real-time data. In the context of enterprises, an AI-powered multi-agent network can integrate various tools and processes, streamlining operations, enhancing decision-making, and ultimately improving client service.
The ReAct (Reasoning and Acting) template is a prompt template designed to enhance the decision-making and problem-solving capabilities of multi-agent systems. It combines reasoning (thought processes to understand and interpret information) with acting (taking actions based on reasoning outcomes). This integration allows ReAct agents to handle complex tasks that require both cognitive and operational skills.
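The reasoning-acting loop described above can be sketched in a few lines of Python. The `llm()` helper and the `add` tool below are hypothetical stand-ins used purely for illustration; they are not Purple Fabric APIs:

```python
# Minimal sketch of the ReAct (Reason + Act) loop.
# llm() is a stand-in for a real model call; TOOLS holds the agent's actions.

def llm(prompt: str) -> str:
    """Returns a canned Thought/Action step to simulate a model response."""
    if "Observation: 4" in prompt:
        return "Thought: I have the result.\nFinal Answer: 4"
    return "Thought: I need to add the numbers.\nAction: add\nAction Input: 2, 2"

TOOLS = {"add": lambda args: str(sum(int(a) for a in args.split(",")))}

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(prompt)
        prompt += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse the Action and Action Input, run the tool, append the Observation
        action = step.split("Action:")[1].split("\n")[0].strip()
        action_input = step.split("Action Input:")[1].split("\n")[0].strip()
        prompt += f"\nObservation: {TOOLS[action](action_input)}"
    return "No answer within step limit"

print(react("What is 2 + 2?"))  # → 4
```

Each iteration interleaves a reasoning step (Thought) with an operational step (Action plus Observation), which is what lets a ReAct agent delegate work to tools and other agents.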
Purple Fabric enables agentic workflows with multi-agent systems. Each expert agent in an agentic system can be powered by the LLM best suited to that agent from a quality, speed, and cost perspective.
Users must have the Gen AI User policy to create a Multi-Agent Network in Purple Fabric.
This guide will walk you through the steps to build a multi-agent network using Automation Agents in Purple Fabric.
- Create an asset
- Select a prompt template
- Select a model and set model configuration
- Provide the system instructions, actions, and examples
- Run Model and view results
- Publish the asset
Step 1: Create an asset
- Head to the Gen AI Studio module and click Create Asset.
- In the Create Gen AI asset window that appears, enter Asset Name.
- In Type, choose Automation Agent and click Create.
- Optional: Enter Description and upload a Display image.
Step 2: Select a prompt template
- On the Gen AI Asset creation page that appears, choose ReAct Prompt template.
Step 3: Select a model and set model configurations
Select a Model
- Select a model from the available list, considering model size, capability, and performance. Refer to the table below to choose the appropriate model for your requirements.
| LLM Model | Model Input (as configured on the platform) | Model Output | Input Context Window (Tokens) | Output Generation Size (Tokens) | Capability and Suitable For |
|---|---|---|---|---|---|
| Azure OpenAI GPT 3.5 Turbo 4K | Text | Text | 4,096 | 4,096 | Ideal for applications requiring efficient chat responses, code generation, and traditional text completion tasks. |
| Azure OpenAI GPT 3.5 Turbo 16K | Text | Text | 16,384 | 4,096 | Ideal for applications requiring efficient chat responses, code generation, and traditional text completion tasks. |
| Azure OpenAI GPT-4o | Text | Text | 128,000 | 16,384 | Strong performance on text-based tasks such as knowledge-based Q&A, text summarization, and language generation in over 50 languages. Also useful for complex problem-solving, advanced reasoning, and generating detailed outputs. Recommended for ReAct. |
| Azure OpenAI GPT-4o mini | Text | Text | 128,000 | 16,384 | Similar to GPT-4o, but at lower cost and with slightly lower accuracy. Recommended for ReAct. |
| Bedrock Claude 3 Haiku 200K | Text + Image | Text | 200,000 | 4,096 | A fast, compact member of the Claude 3 family of large language models. Strong multimodal capabilities, adeptly processing text in multiple languages and various visual formats. Its expanded language support and vision analysis skills make it versatile across a wide range of applications. |
| Bedrock Claude 3 Sonnet 200K | Text + Image | Text | 200,000 | 4,096 | More performant than Haiku, Claude 3 Sonnet combines robust language processing with advanced visual analysis. Its strengths in multilingual understanding, reasoning, coding proficiency, and image interpretation make it a versatile tool for various applications across industries. |
Set Model Configuration
- Click the configuration icon, and then set the following tuning parameters to optimize the model’s performance. For more information, see Advanced Configuration.
Step 4: Provide the system instructions, actions, and examples
Provide System Instruction
A system instruction is a command or directive provided to the model to modify its behavior or output in a specific way. For example, a system instruction might instruct the model to classify data or extract data in a specific format.
- Enter the system instruction, crafting a prompt that guides the agent through the automation task.
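As an illustration, a system instruction for a hypothetical document-review coordinator might look like the following; the wording and workflow are assumptions, not a Purple Fabric default:

```python
# Hypothetical system instruction for an automation agent that coordinates
# a QA agent and a writer agent. Adapt the wording to your own task.
SYSTEM_INSTRUCTION = """\
You are a business-assurance coordinator. For each uploaded document:
1. Send the document to the QA agent and collect its findings.
2. If issues are found, ask the writer agent to draft corrections.
3. Return a summary of the findings and the corrected text.
Respond only with the final summary in plain text."""

print(len(SYSTEM_INSTRUCTION.splitlines()))  # → 5
```

Numbered steps and an explicit output format tend to make the agent's behavior more predictable.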
Add Actions
- In the Actions section, click Add.
- In the Actions window that appears, use the search bar to find the required agents/tools.
- Sarah – Business Assurance QA
- James – Business Assurance Writer
- In the Actions window, click the add icon next to the respective agents, and then click X to close the window.
Manage Actions
- In the Actions section, provide a detailed description for each added action.
Note: A detailed description helps the LLM better understand the context of each action.
- Use the Enable toggle to activate the actions.
- Click (ellipsis icon) and select Delete if you wish to remove the Agent/tool.
Add Parameters
- In the Parameter section, click Add.
- Enter the following information.
- Name: Enter the Name of the input parameter.
- Type: Choose File as the data type.
- Description: Enter the Description for each input parameter. The description ensures accurate interpretation and execution of tasks by the Gen AI asset. Be as specific as possible.
- Click the settings icon next to the input parameter to configure the input field settings.
- Choose the required file formats (PDF, JPEG, JPG) from the drop-down menu.
- Select a chunking strategy for file inputs. Available strategies are Page, Words, and Block.
- Click Save to proceed.
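To illustrate what the Words and Page strategies do conceptually, here is a minimal sketch; the actual chunking Purple Fabric applies internally may differ:

```python
# Illustrative sketch of Words vs. Page chunking for file inputs.
# These are simplified stand-ins, not the platform's real implementation.

def chunk_by_words(text: str, words_per_chunk: int = 5) -> list[str]:
    """Split text into fixed-size groups of words."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

def chunk_by_page(pages: list[str]) -> list[str]:
    """Page strategy: each extracted page becomes one chunk."""
    return pages

doc = "The quick brown fox jumps over the lazy dog near the river bank"
print(len(chunk_by_words(doc, 5)))  # → 3
```

Smaller chunks give the model more focused context per call; page-level chunks preserve document layout boundaries.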
Define Output Schema
- In the Output section, click Add to define the output schema for the Asset.
- Enter the Variable Name, Type, and Description for each output variable. Supported types include Text, Number, Boolean, DateTime, Signature, and Table.
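For illustration, the output schema for a hypothetical invoice-review asset could be modeled as follows; the variable names and descriptions are invented, not Purple Fabric defaults:

```python
# Hypothetical output schema: each entry pairs a variable name and type
# with a specific description, mirroring the fields described above.
output_schema = [
    {"name": "invoice_number", "type": "Text",
     "description": "Unique invoice identifier printed at the top of the document"},
    {"name": "total_amount", "type": "Number",
     "description": "Grand total payable, including tax"},
    {"name": "is_approved", "type": "Boolean",
     "description": "Whether the invoice passes all QA checks"},
    {"name": "invoice_date", "type": "DateTime",
     "description": "Date the invoice was issued"},
]

for var in output_schema:
    print(f'{var["name"]} ({var["type"]}): {var["description"]}')
```

Specific descriptions, like the ones above, are what let the model map extracted content onto the right variable.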
Provide Examples
Examples enhance the agent’s understanding and response accuracy for the task at hand, helping the agent learn and improve over time.
- In the Examples section, click Add.
- Update the following information:
- Question: Initial query raised.
- Thought: Considerations or analysis related to the question.
- Action: Steps planned or taken in response to the question.
- Action Input: Inputs or resources needed to carry out the action.
- Observation: Monitoring or assessment of the action’s effectiveness.
- Thought: Further reflections or insights based on the observation.
- Final Answer: Conclusive response or resolution to the initial question.
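Putting the fields together, a filled-in example might read as follows; the agent name and content are hypothetical, and the repeated Thought field follows the order described above:

```python
# Hypothetical ReAct example in the field order listed above.
# A list of (field, value) pairs is used because Thought appears twice.
example = [
    ("Question", "Does the attached policy document meet QA standards?"),
    ("Thought", "I should have the QA agent review the document first."),
    ("Action", "Sarah - Business Assurance QA"),
    ("Action Input", "policy_document.pdf"),
    ("Observation", "Two sections are missing mandatory disclosures."),
    ("Thought", "The document fails QA; I can now give the final verdict."),
    ("Final Answer", "No. The document is missing mandatory disclosures in two sections."),
]

for field, value in example:
    print(f"{field}: {value}")
```

Providing one or two complete traces like this shows the agent exactly how to chain its reasoning to the actions you added.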
Step 5: Run the model and view results
- In the Debug and Prompt section, provide the user input/query.
- Click Run to get the output in the required format.
- Review the generated output.
- Click Reference if you wish to view the reference of the output.
- Select the respective field information to view its reference.
Note: If you are not satisfied with the results, try modifying the system instructions and the descriptions of the output variables. You can also try changing to a different model.
View Trace
- If you wish to view the traces of the prompt and the result, click View trace.
- In the Trace window that appears, review the trace.
Step 6: Publish the asset
- Click Publish once the desired accuracy and performance have been achieved.
- In the Asset Details page that appears, write a description and upload an image for a visual representation.
- Click Publish. The status of the asset changes to Published, and the asset can be accessed in the Gen AI Studio.
Note: Once the Asset is published, you can download the API and its documentation. The API can be consumed independently or used within a specific Use case. If you wish to consume this Asset via API, see Consume an Asset via API.
You can also consume this automation Asset in the Asset Monitor module. For more information, see Consume an Asset via Create Transaction.
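As a sketch of what consuming a published asset via API could look like, the snippet below builds a POST request. The route, payload shape, and authorization header are placeholders; consult the downloaded API documentation for the actual contract:

```python
# Hypothetical sketch of calling a published asset's API.
# The /assets/{id}/run route and bearer-token auth are assumptions,
# not the documented Purple Fabric endpoint.
import json
from urllib import request

def build_run_request(base_url: str, asset_id: str, token: str,
                      payload: dict) -> request.Request:
    """Build a POST request to run a published asset (placeholder route)."""
    return request.Request(
        f"{base_url}/assets/{asset_id}/run",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("https://example.com/api", "asset-123", "TOKEN",
                        {"input_file": "document.pdf"})
print(req.full_url)  # → https://example.com/api/assets/asset-123/run
```

In a real integration you would send this request with `urllib.request.urlopen(req)` (or an HTTP client of your choice) and parse the JSON response against the output schema you defined.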