
Python Applications

If your AI application is developed in Python and does not use LangChain, you can integrate PAIG into your application with the PAIG Python library. This option also lets you customize the flow and decide when to invoke PAIG.

Install paig_client

Privacera's plugin library needs to be installed first. This can be done by running the following command:

Bash
pip install paig_client

Adding AI Application in PAIG


As a first step, add your AI Application in PAIG; this application will be used for the integration. If you have already added the application in PAIG, you can skip this step.

To create a new application, go to Application > AI Applications and click the CREATE APPLICATION button at the top right. This opens a dialog box where you can enter the details of the application.

Downloading Privacera Shield Configuration File


The Privacera Shield configuration file is required to initialize the Privacera Shield library. This file can be downloaded from the PAIG portal.

Navigate to Application > AI Applications and select the application whose configuration file you want to download. Click the DOWNLOAD APP CONFIG button at the top right to download the configuration file.

How to download Application Config

Sample Code

Here is a sample application you can try out.

Create a sample Python file called sample_python_integration.py and copy the following code snippet into it.

Bash
vi sample_python_integration.py
sample_python_integration.py
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import paig_client.exception
import uuid
from openai import OpenAI

# Set the OPENAI_API_KEY environment variable or set it here
openai_client = OpenAI()

paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"

# Generate a random UUID which will be used to bind a prompt with a reply
privacera_thread_id = str(uuid.uuid4())

try:
   with paig_shield_client.create_shield_context(username=user):
      prompt_text = "Who was the first President of USA and where did they live?"
      print(f"User Prompt: {prompt_text}")
      # Validate prompt with Privacera Shield
      updated_prompt_text = paig_shield_client.check_access(
         text=prompt_text,
         conversation_type=ConversationType.PROMPT,
         thread_id=privacera_thread_id
      )
      updated_prompt_text = updated_prompt_text[0].response_text
      print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
      if prompt_text != updated_prompt_text:
         print(f"Updated prompt text: {updated_prompt_text}")

      # Call LLM with updated prompt text
      PROMPT = f"""Use the following pieces of context to answer the question at the end.     
        {updated_prompt_text}    
        ANSWER:
        """

      response = openai_client.chat.completions.create(model="gpt-4", messages=[{"role": "user", "content": PROMPT}],
                                                       temperature=0)
      llm_response = response.choices[0].message.content
      print(f"LLM Response: {llm_response}")
      # Validate LLM response with Privacera Shield
      updated_reply_text = paig_shield_client.check_access(
         text=llm_response,
         conversation_type=ConversationType.REPLY,
         thread_id=privacera_thread_id
      )
      updated_reply_text = updated_reply_text[0].response_text
      print(f"LLM Response (After Privacera Shield): {updated_reply_text}")
except paig_client.exception.AccessControlException as e:
   # If access is denied, then this exception will be thrown. You can handle it accordingly.
   print(f"AccessControlException: {e}")
OpenAI API Key

For OpenAI, set the OPENAI_API_KEY environment variable to your OpenAI API key, or set it in the code.
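Rather than hard-coding the key, you can read it from the environment and fail fast when it is missing. A minimal sketch; the get_openai_api_key helper is our own, not part of the OpenAI package:

```python
import os

def get_openai_api_key() -> str:
    """Return the OpenAI API key from the environment, failing fast if it is unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before running the sample.")
    return key
```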

OpenAI python package

Make sure you have installed the OpenAI Python package.

sample_python_integration.py
import json

import paig_client
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import boto3

# If needed, update the below two variables with your model name and region
model_name = "amazon.titan-tg1-large"
region = "us-west-2"

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=region,
)
accept = "application/json"
contentType = "application/json"

paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"
try:
    with paig_shield_client.create_shield_context(username=user):
        prompt_text = "Who was the first President of USA and where did they live?"
        print(f"User Prompt: {prompt_text}")
        # Validate prompt with Privacera Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
        if prompt_text != updated_prompt_text:
            print(f"Updated prompt text: {updated_prompt_text}")

        # Call LLM with updated prompt text
        PROMPT = f"""Use the following pieces of context to answer the question at the end.     
        {updated_prompt_text}    
        ANSWER:
        """

        prompt_config = {
            "inputText": PROMPT
        }

        body = json.dumps(prompt_config)
        response = bedrock_runtime.invoke_model(modelId=model_name, body=body, accept=accept,
                                                contentType=contentType)

        response_body = json.loads(response.get("body").read())
        results = response_body.get("results")
        for result in results:
            reply_text = result.get('outputText')
            # Validate LLM response with Privacera Shield
            update_reply_text = paig_shield_client.check_access(
                text=reply_text,
                conversation_type=ConversationType.REPLY
            )
            update_reply_text = update_reply_text[0].response_text
            print(f"LLM Response (After Privacera Shield): {update_reply_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
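For reference, the Titan response body parsed above is JSON with a results list of outputText entries. Here is a standalone sketch of that parsing, using a mocked payload (the text and completionReason values are illustrative, not real model output):

```python
import json

# Mocked Bedrock Titan response body; values are illustrative only
raw_body = json.dumps({
    "results": [
        {"outputText": "George Washington was the first President.",
         "completionReason": "FINISH"}
    ]
})

# Same parsing steps as in the sample above
response_body = json.loads(raw_body)
replies = [result.get("outputText") for result in response_body.get("results", [])]
print(replies[0])
```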

boto3 python package

Make sure you have installed the boto3 python package.

AWS IAM role access to Bedrock

Make sure you are running on AWS infrastructure that has access to Bedrock.

Copy the Privacera Shield configuration file to the privacera folder

Bash
mkdir -p privacera
# Copy the Privacera Shield Application configuration file into the privacera folder

Run the sample application

Bash
python sample_python_integration.py

Output
User Prompt: Who was the first President of USA and where did they live?
User Prompt (After Privacera Shield): Who was the first President of USA and where did they live?
LLM Response: The first President of the USA was George Washington. He lived in Mount Vernon, Virginia.
LLM Response (After Privacera Shield): The first President of the USA was <<PERSON>>. He lived in Mount Vernon, Virginia.
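In the output above, Privacera Shield has redacted the person's name to a <<PERSON>> placeholder. The exact tag names depend on your PAIG policies; this sketch simply shows one way to detect such placeholders in a screened reply:

```python
import re

reply = "The first President of the USA was <<PERSON>>. He lived in Mount Vernon, Virginia."

# Redaction placeholders follow the <<TAG>> pattern; tag names vary by policy
redacted_tags = re.findall(r"<<([A-Z_]+)>>", reply)
print(redacted_tags)
```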

Code Breakup and explanation

In your AI application, you need to initialize the PAIG library and call its APIs before each prompt is sent to the LLM and after the response is received from the LLM. If your application uses multiple chains, call the PAIG library APIs before and after each chain invocation. The following code snippets show how to initialize the library and call the APIs:

Importing the PAIG Libraries

Python
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import paig_client.exception
import uuid

Initializing the PAIG Library

Call the setup method to initialize the Privacera Shield library. Since you are not using any frameworks, you can pass an empty list to the setup method.

Python
paig_shield_client.setup(frameworks=[])

Generate a random UUID which will be used to bind a prompt with a response

Python
privacera_thread_id = str(uuid.uuid4())
Checking Access Before Sending Prompt to LLM

Prompt User

If it is a chatbot application, or any application where the user is prompted for input, pass the username of that user to the create_shield_context method. Privacera Shield uses this username to check access for the user. If it is a batch application, pass the username of the service account, which can be any unique name, e.g. document_summarizer. The policies will be checked against this username.

Python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate prompt with Privacera Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT,
            thread_id=privacera_thread_id
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Checking Access After Receiving Response from LLM

Python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate LLM response with Privacera Shield
        updated_reply_text = paig_shield_client.check_access(
            text=llm_response,
            conversation_type=ConversationType.REPLY,
            thread_id=privacera_thread_id
        )
        updated_reply_text = updated_reply_text[0].response_text
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

The conversation type differentiates between the prompt, the RAG context, and the reply. Here are the valid values:

  • Prompt - ConversationType.PROMPT
  • RAG - ConversationType.RAG
  • Reply - ConversationType.REPLY
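For RAG applications, the retrieved context can be screened the same way before it is added to the prompt. A minimal sketch, assuming the setup, username, and thread ID from the sample above; rag_context is a hypothetical string fetched from your vector store:

```python
# Assumes paig_shield_client, ConversationType, user, and privacera_thread_id
# from the sample above. rag_context is a hypothetical retrieved document.
rag_context = "George Washington lived at Mount Vernon, Virginia."
with paig_shield_client.create_shield_context(username=user):
    checked = paig_shield_client.check_access(
        text=rag_context,
        conversation_type=ConversationType.RAG,
        thread_id=privacera_thread_id
    )
    rag_context = checked[0].response_text
```

This requires a live PAIG endpoint and the Privacera Shield configuration file, so it is a sketch of the call pattern rather than a runnable standalone script.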

Privacera Shield Application configuration file

Create a folder called privacera in your application and copy the Privacera Shield Application configuration file into it.


What Next?