
LangChain

Privacera’s integration via LangChain is designed to be nearly touch-free. This is facilitated through the use of Privacera's Shield library, which transparently intercepts calls within LangChain, enforcing policies on the original prompt as well as whenever prompts are altered, whether due to Chains or Retrieval Augmented Generation (RAG). The objective is to ensure that policy adherence is seamlessly maintained across all interactions within the application, irrespective of prompt modifications.

Here are the Quick Start options for trying out the integrations with LangChain.

  1. Google Colab Notebook: You can experiment with the LangChain integration using a Google Colab notebook. This option only requires a Google account. Google Colab provides a free Jupyter notebook environment where you can run the Privacera SecureChat application.

  2. Sample Application: You can download the sample application and run it in your local environment. This option requires Python to be installed locally.

For both options, you'll need to create a Privacera Shield Application in PAIG and download the corresponding configuration file.

Adding AI Application in PAIG


As a first step, you need to add your AI Application in PAIG; this application will be used for the integration. If you have already added the application in PAIG, you can skip this step.

To create a new application, go to Application > AI Application and click the CREATE APPLICATION button at the top right. This will open a dialog box where you can enter the details of the application.

 

Downloading Privacera Shield Configuration File


The Privacera Shield configuration file is required to initialize the Privacera Shield library. This file can be downloaded from the PAIG portal.

Navigate to Application > AI Applications and select the application for which you want to download the configuration file. Click the DOWNLOAD APP CONFIG button at the top right to download the configuration file.

 

How to download Application Config

Using Google Colab Notebook

After you have downloaded the Privacera Shield configuration file, you can open the Google Colab notebook using the Open In Colab link.

Prerequisite

  1. You need to authorize Google Colab to access GitHub.

Using Python Sample Application

The following are the prerequisites for trying out the LangChain integration:

  • LangChain requires Python 3.11 or above

Sample Code

If you would like to try it out first and understand the code later, here is a sample application you can run quickly. The explanation of the code is provided below.

Supported versions

Library               Versions
Python                3.11+
langchain-community   0.0.17
langchain-openai      0.0.5
langchain             0.1.5

For OpenAI, you can download sample_langchain_integration.py and requirements.txt:

sample_langchain_integration.py
import os
import paig_client
from paig_client import client as paig_shield_client
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

api_key = os.getenv("OPENAI_API_KEY")  # (1)

# Initialize Privacera Shield
paig_shield_client.setup(frameworks=["langchain"])

llm = OpenAI(openai_api_key=api_key)
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Let's assume the user is "testuser"
user = "testuser"
prompt_text = "Who was the first President of USA and where did they live?"
print(f"Prompt: {prompt_text}")
print()

llm_chain = LLMChain(prompt=prompt, llm=llm)
try:
    with paig_shield_client.create_shield_context(username=user):
        response = llm_chain.invoke(prompt_text)
        print(f"LLM Response: {response.get('text')}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
  1. OpenAI API Key

    Don't forget to set the OPENAI_API_KEY environment variable to your OpenAI API key.

OpenAI Key

For OpenAI, you need to set the OPENAI_API_KEY environment variable or set it in the code.
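If you prefer to set the key in code rather than as an environment variable, a minimal sketch is shown below (for local experimentation only; avoid hardcoding keys in production). Set it before the OpenAI client is created:

Python
import os

# Assumption: local experimentation where OPENAI_API_KEY has not been exported.
# For production, prefer environment variables or a secrets manager.
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"

# The OpenAI client picks the key up from the environment, or you can pass it explicitly:
# llm = OpenAI(openai_api_key=os.environ["OPENAI_API_KEY"])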

requirements.txt
langchain-community==0.0.17
langchain-openai==0.0.5
langchain==0.1.5
paig_client
For Amazon Bedrock, you can use the following version of the sample:

sample_langchain_integration.py
import os

import paig_client
from paig_client import client as paig_shield_client
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Initialize Privacera Shield
paig_shield_client.setup(frameworks=["langchain"])

model_name = "amazon.titan-tg1-large"
region = "us-west-2"
llm = Bedrock(model_id=model_name, region_name=region)

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Let's assume the user is "testuser"
user = "testuser"
prompt_text = "Who was the first President of USA and where did they live?"
llm_chain = LLMChain(prompt=prompt, llm=llm)
try:
    with paig_shield_client.create_shield_context(username=user):
        response = llm_chain.run(prompt_text)
        print(f"LLM Response: {response}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Dependent Python packages

Make sure you have installed the dependent Python packages such as boto3 and langchain.

AWS IAM role access to Bedrock

Make sure you are running on AWS infrastructure that has access to Bedrock. You can verify this with the optional check below.
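To quickly confirm that your environment has credentials and Bedrock access before running the sample, you can use a small boto3 check like the one below. This is an optional sketch, assuming your boto3 version includes the bedrock client; the region mirrors the sample above:

Python
import boto3

# Assumption: AWS credentials are available via the instance/role profile or environment.
region = "us-west-2"

# Control-plane client; listing foundation models confirms the role can reach Bedrock.
bedrock = boto3.client("bedrock", region_name=region)
models = bedrock.list_foundation_models()
print(f"Found {len(models['modelSummaries'])} Bedrock foundation models in {region}")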


It is recommended to use a Python virtual environment to run the sample application. The following steps show how to create a virtual environment and run the sample. Create a folder where you want to run the sample, e.g.

Bash
mkdir -p ~/paig-sample
cd ~/paig-sample

Create a virtual environment and activate it

Bash
python3 -m venv venv
source venv/bin/activate

Install the required python packages

Bash
pip3 install -r requirements.txt

Copy Privacera Shield configuration file to the privacera folder

Bash
mkdir -p privacera

Copy the Privacera Shield Application configuration file to the privacera folder. It is okay not to rename the config file, e.g.

Bash
cp ~/Downloads/privacera-shield-SecureChat-config.json privacera/

If you are using OpenAI, then you need to set the OPENAI_API_KEY environment variable or set it in the code.

Bash
export OPENAI_API_KEY="<your-openai-api-key>"

Run the sample application

Bash
python3 sample_langchain_integration.py

Check the security audits

Now go to Security Audits to check the prompts and responses for the testuser.

Code Breakup and explanation

In your AI application, you need to initialize the Privacera Shield library. Once it is initialized, it automatically embeds itself within the LangChain framework and intercepts all requests made by the user as well as the interactions between agents and chains. The following sections show how to initialize the PAIG library.

Copy the Privacera Shield Application configuration file

Create a folder called privacera in your application and copy the Privacera Shield Application configuration file into it.
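If you want to confirm that the configuration file is in place before initializing the library, a minimal sketch (assuming the JSON config was copied into the privacera folder as described above) is:

Python
import glob

# Assumption: the Privacera Shield application config was copied into ./privacera
config_files = glob.glob("privacera/*.json")
if not config_files:
    raise FileNotFoundError("No Privacera Shield application config found in the privacera folder")
print(f"Using Privacera Shield config: {config_files[0]}")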

Install paig_client

Privacera's Shield library needs to be installed first. This can be done by running the following command:

Bash
pip install paig_client

Importing the PAIG Libraries

Add the following imports to your application

Python
import paig_client
from paig_client import client as paig_shield_client

Initializing the PAIG Library

Call the setup method to initialize the Privacera Shield library.

Python
paig_shield_client.setup(frameworks=["langchain"])

Setting Privacera Shield Context

Before calling LangChain, the Privacera Shield context needs to be set. This is primarily to set the user context.

Prompt User

If it is a chatbot application or an application where the user is prompted for input, then you need to pass the username of the user to the create_shield_context method. Privacera Shield will use this username to check access for the user. If it is a batch application, then you can pass the username for the service account, which could be any unique name e.g. document_summarizer. The policies will be checked against this username.

Python
try:
    with paig_shield_client.create_shield_context(username=user):
        response = llm_chain.invoke(prompt_text)
        print(f"LLM Response: {response.get('text')}")
except paig_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

What Next?