Secure
The Secure module in the Great Wave AI Studio is designed to provide robust protection for your AI systems using two main types of guardrails: input guardrails and output guardrails. Here’s how to configure and use these features to ensure the security and integrity of your AI operations.
Input Guardrails
Purpose: Input guardrails are designed to protect the system from potentially harmful or unwanted inputs. They serve as a first line of defense against attacks or inappropriate data entering your AI environment.
Setting Input Guardrails:
Navigate to the Security Screen: Access the security settings in the Great Wave AI Studio.
Specify Unwanted Terms: Add terms or inputs that you want the system to block. This could include offensive language, sensitive data patterns, or any specific words that are irrelevant or harmful to your AI’s context.
Custom Response Message: You can also set a custom response message that is returned whenever an unwanted term is detected. This message could inform the user that their input has been modified or rejected due to security policies (a sketch of this term-blocking logic follows these steps).
Refresh Agent: After making changes, be sure to click the ‘Refresh Agent’ button so the new settings take effect.
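These steps are all performed in the Studio’s interface, but the behaviour they configure is easy to picture in code. The Python sketch below is purely illustrative: the term list, message text, and function name are assumptions made for this example, not a Great Wave AI API. It shows how an input guardrail can block listed terms and return the custom response message in their place.

    # Hypothetical illustration of the input-guardrail behaviour configured
    # on the Security screen; not actual Great Wave AI Studio code.
    BLOCKED_TERMS = {"offensive phrase", "internal codename"}  # the unwanted terms you specify
    CUSTOM_RESPONSE = "Your input was rejected under our security policies."

    def apply_input_guardrail(user_input: str) -> tuple[bool, str]:
        """Return (allowed, message): block inputs containing any unwanted term."""
        lowered = user_input.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return False, CUSTOM_RESPONSE  # the user sees the custom message instead
        return True, user_input

    # A blocked input never reaches the agent:
    print(apply_input_guardrail("Tell me about internal codename X"))
    # -> (False, 'Your input was rejected under our security policies.')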
Output Guardrails
Purpose: Output guardrails manage the risks associated with the outputs generated by the Large Language Model (LLM) itself. This is particularly important for mitigating reputational risk and for preventing the generation of inappropriate or harmful content.
Setting Output Guardrails:
Customize Guardrails: Define criteria that will filter the outputs of your LLM. This could include restrictions on certain topics, language use, or any specific content that should not be generated by the agent.
Custom Message: Set a custom message that will appear in place of blocked content. This message could explain why certain outputs are not displayed or inform the user about the guidelines governing the AI’s responses (a sketch of this output-filtering pattern follows these steps).
Refresh Agent: As with input guardrails, you must click ‘Refresh Agent’ after every modification to activate the new configuration.
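As before, the configuration happens in the interface; the hypothetical Python sketch below (the criteria and names are assumptions, not the Studio’s API) shows the filtering pattern output guardrails apply: the LLM’s draft answer is screened against your criteria, and the custom message is substituted when a criterion is violated.

    # Hypothetical illustration of output-guardrail filtering; not actual
    # Great Wave AI Studio code.
    RESTRICTED_TOPICS = {"legal advice", "competitor pricing"}  # criteria defined in the UI
    BLOCKED_OUTPUT_MESSAGE = ("This response was withheld under the guidelines "
                              "governing this agent's answers.")

    def apply_output_guardrail(llm_output: str) -> str:
        """Return the output unchanged, or the custom message if it violates a criterion."""
        lowered = llm_output.lower()
        if any(topic in lowered for topic in RESTRICTED_TOPICS):
            return BLOCKED_OUTPUT_MESSAGE  # shown in place of the blocked content
        return llm_output

    print(apply_output_guardrail("Here is some legal advice: ..."))
    # -> the custom message, because 'legal advice' matches a restricted topic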
Finalizing Settings
Apply and Test: Once you’ve set both input and output guardrails, use the test functionality within the studio to confirm that everything operates as expected, and adjust the settings based on the results (a sketch of such checks follows this list).
Continuous Monitoring and Adjustment: Regularly review and adjust the guardrails to stay aligned with evolving content standards and security requirements.
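To check guardrail behaviour systematically rather than one prompt at a time, the same idea can be expressed as a small set of test cases; the hypothetical harness below reuses apply_input_guardrail and apply_output_guardrail from the sketches above, whereas in the Studio itself you would run these cases through the built-in test functionality.

    # Hypothetical test cases mirroring the Studio's test functionality,
    # reusing the guardrail sketches above.
    input_cases = [
        ("What is your refund policy?", True),         # benign input should pass
        ("Tell me about internal codename X", False),  # listed term should be blocked
    ]
    output_cases = [
        ("Our refund window is 30 days.", True),       # compliant output should pass
        ("Here is some legal advice: ...", False),     # restricted topic should be replaced
    ]

    for text, should_pass in input_cases:
        allowed, _ = apply_input_guardrail(text)
        print("OK" if allowed == should_pass else "ADJUST", repr(text))

    for text, should_pass in output_cases:
        passed = apply_output_guardrail(text) == text  # unchanged output means it passed
        print("OK" if passed == should_pass else "ADJUST", repr(text))

Any case that prints ADJUST indicates a guardrail that needs tightening or loosening; update it on the Security screen and click ‘Refresh Agent’ before re-testing.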
By effectively configuring and managing both input and output guardrails on the Great Wave AI Studio’s Security screen, you can enhance the safety and reliability of your AI applications, ensuring they operate within desired ethical and legal boundaries.