
Python Applications

If your AI application is developed in Python and does not use LangChain, you can integrate PAIG using the PAIG Python library. This option also lets you customize the flow and decide exactly when to invoke PAIG.

Install paig_client

The PAIG client library needs to be installed first. This can be done by running the following command:

Bash
pip install paig_client

Adding AI Application in PAIG

As a first step, you need to add your AI Application in PAIG; this application will be used for the integration. If you have already added the application to PAIG, you can skip this step.

To create a new application, go to Paig Navigator > AI Applications and click the CREATE APPLICATION button in the top-right corner. This will open a dialog box where you can enter the details of the application.

How to add AI Application

Generate AI application API key

The AI Application API key needs to be exported as "PAIG_APP_API_KEY" to initialize the PAIG Shield library. This API key can be generated from the PAIG portal.

Navigate to Paig Navigator > AI Applications, and select the application for which you want to generate the API key. In the API KEYS tab, click the GENERATE API KEY button in the top-right corner. Provide a Name and Description, along with an Expiry, or select the Max Validity (1 year) checkbox to set the default expiry.

Once you generate the API key, you can view it by clicking the eye icon. Make sure to copy and store the key securely.

AI Application API key

API Key Generation

Once the API key is generated, it will not be displayed again. Ensure you copy and securely store it immediately after generation.

Set the PAIG API Key

To initialize the PAIG Shield library in your AI application, export the PAIG_APP_API_KEY as an environment variable.

Bash
export PAIG_APP_API_KEY=<API_KEY>

Alternative Method: Pass API Key in Code

If you prefer not to use environment variables, you can directly pass the API key when initializing the library:

Python
paig_shield_client.setup(frameworks=[], application_config_api_key="<API_KEY>")

For a complete code example showing where to place this, locate the setup() method in the provided sample code section below.

Precedence Rule

If the PAIG_APP_API_KEY is set both as an environment variable and in the code, the key specified in the code will take priority.
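
For example, in the following minimal sketch (the key values are placeholders), the key passed to setup() is the one that takes effect, even though the environment variable is also set:

Python
import os

from paig_client import client as paig_shield_client

# The environment variable is set...
os.environ["PAIG_APP_API_KEY"] = "<ENV_API_KEY>"

# ...but the key passed in code takes priority
paig_shield_client.setup(frameworks=[], application_config_api_key="<CODE_API_KEY>")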

Sample Code

Here is a sample application you can try out.

Create a sample Python file

Create a file called something like sample_python_integration.py and copy the following code snippet into it.

Bash
vi sample_python_integration.py
sample_python_integration.py
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import paig_client.exception
import uuid
from openai import OpenAI

# Set the OPENAI_API_KEY environment variable or set it here
openai_client = OpenAI()

# Initialize PAIG Shield 
# Set the PAIG_APP_API_KEY environment variable or set it here in the setup method
paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"

# Generate a random UUID which will be used to bind a prompt with a reply
paig_thread_id = str(uuid.uuid4())

try:
   with paig_shield_client.create_shield_context(username=user):
      prompt_text = "Who was the first President of USA and where did they live?"
      print(f"User Prompt: {prompt_text}")
      # Validate prompt with paig Shield
      updated_prompt_text = paig_shield_client.check_access(
         text=prompt_text,
         conversation_type=ConversationType.PROMPT,
         thread_id=paig_thread_id
      )
      updated_prompt_text = updated_prompt_text[0].response_text
      print(f"User Prompt (After PAIG Shield): {updated_prompt_text}")
      if prompt_text != updated_prompt_text:
         print(f"Updated prompt text: {updated_prompt_text}")

      # Call LLM with updated prompt text
      PROMPT = f"""Use the following pieces of context to answer the question at the end.     
        {updated_prompt_text}    
        ANSWER:
        """

      response = openai_client.chat.completions.create(model="gpt-4", messages=[{"role": "user", "content": PROMPT}],
                                                       temperature=0)
      llm_response = response.choices[0].message.content
      print(f"LLM Response: {llm_response}")
      # Validate LLM response with PAIG Shield
      updated_reply_text = paig_shield_client.check_access(
         text=llm_response,
         conversation_type=ConversationType.REPLY,
         thread_id=paig_thread_id
      )
      updated_reply_text = updated_reply_text[0].response_text
      print(f"LLM Response (After PAIG Shield): {updated_reply_text}")
except paig_client.exception.AccessControlException as e:
   # If access is denied, then this exception will be thrown. You can handle it accordingly.
   print(f"AccessControlException: {e}")

OpenAI API Key

For OpenAI, you need to set the OPENAI_API_KEY environment variable or set it in the code.
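
For example, you can export it as an environment variable (replace the placeholder with your actual key):

Bash
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>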

OpenAI python package

Make sure you have installed the OpenAI Python package.
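
If the package is not installed yet, you can install it with pip:

Bash
pip install openai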

If you are using AWS Bedrock instead of OpenAI, here is an equivalent sample:

sample_python_integration.py
import json

import paig_client
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import uuid
import boto3

# If needed, update the below 2 variables with your model name and region
model_name = "amazon.titan-tg1-large"
region = "us-west-2"

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=region,
)
accept = "application/json"
contentType = "application/json"

# Initialize PAIG Shield  
# Set the PAIG_APP_API_KEY environment variable or set it here in the setup method
paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"

# Generate a random UUID which will be used to bind a prompt with a reply
paig_thread_id = str(uuid.uuid4())
try:
    with paig_shield_client.create_shield_context(username=user):
        prompt_text = "Who was the first President of USA and where did they live?"
        print(f"User Prompt: {prompt_text}")
        # Validate prompt with PAIG Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT,
            thread_id=paig_thread_id
        )
        # check_access returns a list; extract the (possibly modified) text
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After PAIG Shield): {updated_prompt_text}")
        if prompt_text != updated_prompt_text:
            print(f"Updated prompt text: {updated_prompt_text}")

        # Call LLM with updated prompt text
        PROMPT = f"""Use the following pieces of context to answer the question at the end.     
        {updated_prompt_text}    
        ANSWER:
        """

        prompt_config = {
            "inputText": PROMPT
        }

        body = json.dumps(prompt_config)
        response = bedrock_runtime.invoke_model(modelId=model_name, body=body, accept=accept,
                                                contentType=contentType)

        response_body = json.loads(response.get("body").read())
        results = response_body.get("results")
        for result in results:
            reply_text = result.get('outputText')
            # Validate LLM response with PAIG Shield
            updated_reply_text = paig_shield_client.check_access(
                text=reply_text,
                conversation_type=ConversationType.REPLY,
                thread_id=paig_thread_id
            )
            # check_access returns a list; extract the (possibly modified) text
            updated_reply_text = updated_reply_text[0].response_text
            print(f"LLM Response (After PAIG Shield): {updated_reply_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

boto3 python package

Make sure you have installed the boto3 Python package.
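
If needed, install it with pip:

Bash
pip install boto3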

AWS IAM role access to Bedrock

Make sure you are running on AWS infrastructure that has access to Bedrock.

Run the sample application

Bash
python sample_python_integration.py

Output
User Prompt: Who was the first President of USA and where did they live?
User Prompt (After PAIG Shield): Who was the first President of USA and where did they live?
LLM Response: The first President of the USA was George Washington. He lived in Mount Vernon, Virginia.
LLM Response (After PAIG Shield): The first President of the USA was <<PERSON>>. He lived in Mount Vernon, Virginia.

Code Breakdown and Explanation

In your AI application, you need to initialize the PAIG library and call its APIs before prompts are sent to the LLM and after responses are received from the LLM. If you are using multiple chains, you need to call the PAIG library APIs before and after each chain invocation. The following code snippets show how to initialize the PAIG library and call the APIs:

Importing the PAIG Libraries

Python
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import paig_client.exception
import uuid

Initializing the PAIG Library

Call the setup method to initialize the PAIG Shield library. Since you are not using any frameworks, you can pass an empty list to the setup method.

Python
paig_shield_client.setup(frameworks=[])

Generate a random UUID which will be used to bind a prompt with a response

Python
paig_thread_id = str(uuid.uuid4())

Checking Access Before Sending Prompt to LLM

Prompt User

If it is a chatbot application or an application where the user is prompted for input, then you need to pass the username of the user to the create_shield_context method. PAIG Shield will use this username to check access for the user. If it is a batch application, then you can pass the username for the service account, which could be any unique name e.g. document_summarizer. The policies will be checked against this username.

Python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate prompt with PAIG Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT,
            thread_id=paig_thread_id
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After PAIG Shield): {updated_prompt_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Checking Access After Receiving Response from LLM

Python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate LLM response with PAIG Shield
        updated_reply_text = paig_shield_client.check_access(
            text=llm_response,
            conversation_type=ConversationType.REPLY,
            thread_id=paig_thread_id
        )
        updated_reply_text = updated_reply_text[0].response_text
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

The conversation type is used to differentiate between the prompt, the RAG context, and the reply (a RAG usage sketch follows the list). Here are the valid values:

  • Prompt - ConversationType.PROMPT
  • RAG - ConversationType.RAG
  • Reply - ConversationType.REPLY
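
If your application retrieves context documents for RAG, you can validate them with PAIG Shield before they are added to the prompt, using the same check_access API. The following is a minimal sketch; retrieved_context is a hypothetical string returned by your retrieval step:

Python
# Hypothetical context returned by your retrieval step
retrieved_context = "..."

try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate the retrieved context with PAIG Shield before adding it to the prompt
        updated_context = paig_shield_client.check_access(
            text=retrieved_context,
            conversation_type=ConversationType.RAG,
            thread_id=paig_thread_id
        )
        updated_context = updated_context[0].response_text
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")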

What Next?