If your AI application is developed in Python and is not using LangChain, you can integrate PAIG with your application using the PAIG Python library. This option also lets you customize the flow and decide when to invoke PAIG.
As a first step, add your AI Application in PAIG; that application will be used for the integration. If you have already added the application to PAIG, you can skip this step.
To create a new application, go to Application > AI Application and click the CREATE APPLICATION button at the top right. This will open a dialog box where you can enter the details of the application.
The Privacera Shield configuration file is required to initialize the Privacera Shield library. This file can be downloaded from the PAIG portal.
Navigate to Application -> AI Applications and select the application you want to download the configuration file for. Click the DOWNLOAD APP CONFIG button at the top right to download the configuration file.
```python
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import paig_client.exception
import uuid
from openai import OpenAI

# Set the OPENAI_API_KEY environment variable or set it here
openai_client = OpenAI()

paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"

# Generate a random UUID which will be used to bind a prompt with a reply
privacera_thread_id = str(uuid.uuid4())

try:
    with paig_shield_client.create_shield_context(username=user):
        prompt_text = "Who was the first President of USA and where did they live?"
        print(f"User Prompt: {prompt_text}")

        # Validate prompt with Privacera Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT,
            thread_id=privacera_thread_id
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
        if prompt_text != updated_prompt_text:
            print(f"Updated prompt text: {updated_prompt_text}")

        # Call LLM with updated prompt text
        PROMPT = f"""Use the following pieces of context to answer the question at the end.
        {updated_prompt_text}
        ANSWER:
        """
        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0
        )
        llm_response = response.choices[0].message.content
        print(f"LLM Response: {llm_response}")

        # Validate LLM response with Privacera Shield
        updated_reply_text = paig_shield_client.check_access(
            text=llm_response,
            conversation_type=ConversationType.REPLY,
            thread_id=privacera_thread_id
        )
        updated_reply_text = updated_reply_text[0].response_text
        print(f"LLM Response (After Privacera Shield): {updated_reply_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
```
OpenAI API Key

Don't forget to set the OPENAI_API_KEY environment variable to your OpenAI API key, or set it in the code.
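For example, the key can be exported in the shell before starting the application (the value below is a placeholder, not a real key):

```shell
# Replace the placeholder with your real OpenAI API key.
export OPENAI_API_KEY="sk-your-key-here"
```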
OpenAI python package
Make sure you have installed the OpenAI Python package.
```python
import json
import paig_client.exception
from paig_client import client as paig_shield_client
from paig_client.model import ConversationType
import boto3

# If needed, update the below 2 variables with your model name and region
model_name = "amazon.titan-tg1-large"
region = "us-west-2"

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=region,
)

accept = "application/json"
contentType = "application/json"

paig_shield_client.setup(frameworks=[])

# Replace "testuser" with the user who is using the application. Or you can use the service username
user = "testuser"

try:
    with paig_shield_client.create_shield_context(username=user):
        prompt_text = "Who was the first President of USA and where did they live?"
        print(f"User Prompt: {prompt_text}")

        # Validate prompt with Privacera Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
        if prompt_text != updated_prompt_text:
            print(f"Updated prompt text: {updated_prompt_text}")

        # Call LLM with updated prompt text
        PROMPT = f"""Use the following pieces of context to answer the question at the end.
        {updated_prompt_text}
        ANSWER:
        """
        prompt_config = {"inputText": PROMPT}
        body = json.dumps(prompt_config)
        response = bedrock_runtime.invoke_model(
            modelId=model_name,
            body=body,
            accept=accept,
            contentType=contentType
        )
        response_body = json.loads(response.get("body").read())
        results = response_body.get("results")
        for result in results:
            reply_text = result.get("outputText")

            # Validate LLM response with Privacera Shield
            updated_reply_text = paig_shield_client.check_access(
                text=reply_text,
                conversation_type=ConversationType.REPLY
            )
            updated_reply_text = updated_reply_text[0].response_text
            print(f"LLM Response (After Privacera Shield): {updated_reply_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
```
boto3 python package
Make sure you have installed the boto3 Python package.
AWS IAM role access to Bedrock
Make sure you are running on AWS infrastructure that has access to Bedrock.
Copy the Privacera Shield configuration file to the privacera folder.
User Prompt: Who was the first President of USA and where did they live?
LLM Response: The first President of the USA was George Washington. He lived in Mount Vernon, Virginia.
User Prompt (After Privacera Shield): Who was the first President of USA and where did they live?
LLM Response (After Privacera Shield): The first President of the USA was <<PERSON>>. He lived in Mount Vernon, Virginia.
In your AI application, you need to initialize the PAIG library and call its APIs before the prompts are sent to the LLM and after the response is received from the LLM. If you are using multiple chains, then you need to call the PAIG library APIs before and after each chain invocation. The following code snippets show how to initialize the PAIG library and call the APIs:
Call the setup method to initialize the Privacera Shield library. Since you are not using any frameworks, you can pass an empty list to the setup method.
If it is a chatbot application or an application where the user is prompted for input, then you need to pass the username of the user to the create_shield_context method. Privacera Shield will use this username to check access for the user. If it is a batch application, then you can pass the username for the service account, which could be any unique name e.g. document_summarizer. The policies will be checked against this username.
```python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate prompt with Privacera Shield
        updated_prompt_text = paig_shield_client.check_access(
            text=prompt_text,
            conversation_type=ConversationType.PROMPT,
            thread_id=privacera_thread_id
        )
        updated_prompt_text = updated_prompt_text[0].response_text
        print(f"User Prompt (After Privacera Shield): {updated_prompt_text}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
```
```python
try:
    with paig_shield_client.create_shield_context(username=user):
        # Validate LLM response with Privacera Shield
        updated_reply_text = paig_shield_client.check_access(
            text=llm_response,
            conversation_type=ConversationType.REPLY,
            thread_id=privacera_thread_id
        )
        updated_reply_text = updated_reply_text[0].response_text
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
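Both snippets pass the same `thread_id` so that PAIG can bind the prompt to its reply. A fresh random UUID per conversation turn works well; a minimal sketch:

```python
import uuid

# Generate one thread id per prompt/reply pair; pass the same value to
# both check_access calls so PAIG can correlate the prompt and the reply.
privacera_thread_id = str(uuid.uuid4())
print(privacera_thread_id)
```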
The conversation type is used to differentiate between the prompt, RAG and the reply. Here are the valid values:
Prompt - ConversationType.PROMPT
RAG - ConversationType.RAG
Reply - ConversationType.REPLY
Privacera Shield Application configuration file
Create a folder called privacera in your application and copy the Privacera Shield Application configuration file into it.
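For example, from your application's root directory (the downloaded filename and path below are assumptions; use the actual name of the file downloaded from the PAIG portal):

```shell
# Create the privacera folder that the Privacera Shield library reads from
mkdir -p privacera

# Copy the downloaded configuration file into it (source path is an example)
CONFIG_FILE="$HOME/Downloads/privacera-shield-app-config.json"
if [ -f "$CONFIG_FILE" ]; then
  cp "$CONFIG_FILE" privacera/
fi
```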