The 5 steps to consume Amazon Bedrock and Azure OpenAI Generative AI services from SAP BTP using SAP AI Core

Hello. In today’s blog, I will discuss why and how we can use an external Generative AI service from SAP BTP using AI Core.

🤔 What is AI Core, and how does it differ from Data Intelligence Cloud?

🎯 SAP AI Core handles the execution and operations of our AI assets in BTP.

🌐 SAP Data Intelligence Cloud is for data scientists and IT teams to collaboratively design, deploy, and manage machine-learning models, with built-in SAP tools for data governance, management, and transparency.

In a nutshell: in BTP, we use Data Intelligence Cloud when we want to build our own models, and AI Core when we want to run a model we push from our GitHub repo or consume an external model through an API.

There is a third way to use Python in the SAP ecosystem: SAP HANA or HANA Cloud with the PAL and APL libraries. If we want to use the power of HANA instead of the power of Docker/Kubernetes, this is not the blog for that; check out this repository at this link.

 

AI Core has been around for some months, and running a Language Model on BTP has been described in previous blog posts here and here and in this tutorial.

Now, I will describe how to consume an external LLM service from BTP using AI Core.

 

 

Key Elements of BTP AI Core and Launchpad from the image

Summary of Elements and Usage

What | Why
Git repository | For storing training and serving workflows and templates
Amazon S3 | For storing input and output artifacts, such as training data and models
Docker repository | For custom Docker images referenced in the templates
Docker / Kubernetes | The cluster that runs the AI pipelines
Amazon S3 / EC2 | As of today, AI Core runs on AWS infrastructure only, with Docker images tied to EC2 instances from the P and G families
AI API | For managing our artifacts and workflows (such as training scripts, data, models, and model servers) across multiple runtimes; the AI API can also integrate other machine learning platforms, engines, or runtimes into the AI ecosystem
LangChain | Language model application development and agents
SAP AI Launchpad | For managing AI use cases (scenarios) across multiple instances of AI runtimes (such as SAP AI Core)
Amazon Bedrock / Azure OpenAI | Models hosted on hyperscalers, accessed through an API
SAP Graph | For accessing data from multiple line-of-business (LOB) systems, such as SAP S/4HANA Cloud, SAP SuccessFactors, and SAP S/4HANA
SAP Integration Suite | For accessing APIs from SAP or third-party systems
ChromaDB | Vector database for embeddings (RAG)

If you want to see other descriptions of AI Core objects, check out this blog


Step 1. Prerequisites: BTP Setup, AI Core and Launchpad

  • SAP BTP Subaccount
  • SAP AI Core instance
  • SAP AI Launchpad subscription (recommended)
  • SAP BTP, Cloud Foundry Runtime
  • SAP Authorization and Trust Management Service
  • Docker for LangChain
  • Complete AI Core Onboarding
  • Access to the Git repository, Docker repository, and S3 bucket onboarded to AI Core.
  • An Amazon Bedrock or Azure OpenAI account, an API key to access that account, and a model deployed on the service.
  1. Install the SAP AI Core SDK, a Python-based SDK that lets us access SAP AI Core using Python methods and data structures, using this link.
  2. Install the Content Package for Large Language Models for SAP AI Core, a content package to deploy LLM workflows in AI Core, following this link. A quick install-and-import check follows below.
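
To verify the setup, here is a minimal sketch, assuming the PyPI package names ai-core-sdk (the SAP AI Core SDK) and sap-ai-core-llm (the LLM content package linked in Step 5):

    # Install both packages first (shell):
    #   pip install ai-core-sdk sap-ai-core-llm
    from ai_core_sdk.ai_core_v2_client import AICoreV2Client

    # If the import succeeds, the SDK is ready for the steps below
    print("SAP AI Core SDK available:", AICoreV2Client.__name__)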

 

Step 2. Build and Push Docker Images

Docker images are how we package our AI content and integrate it with the cloud infrastructure on BTP. In this step, we:

  • Register our Docker registry
  • Synchronize our AI content from our Git repository
  • Productize our AI content
  • Expose it as a service to consumers in the SAP BTP marketplace (for LangChain)
  1. Generate a Docker image with aicore-content create-image.
  2. Generate a serving template with aicore-content create-template (see the sketch below).
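
A sketch of scripting those two commands from Python; the positional argument "llm" is a hypothetical content-package name, and the exact arguments depend on the content package installed in Step 1:

    import subprocess

    # Build a Docker image from the content package
    # ("llm" is a hypothetical content-package name; adjust to your package)
    subprocess.run(["aicore-content", "create-image", "llm"], check=True)

    # Generate the serving template that references that image
    subprocess.run(["aicore-content", "create-template", "llm"], check=True)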

Note that using LangChain, LlamaIndex, or other open-source frameworks on BTP is easy and possible on any of the resources available in BTP; we can find more information about the power of these instances in the AWS documentation for the P3 and G4 families.

LangChain is a recommended framework to use in combination with BTP: it allows developers to use and build tools, prompts, vector stores, agents, text splitters, and output parsers, which are fundamental building blocks of high-quality LLM scenarios. A minimal example follows.
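
As a taste of these building blocks, here is a minimal sketch of a prompt-plus-LLM chain, assuming a classic (pre-1.0) langchain release and an Azure OpenAI deployment; the deployment name is a hypothetical placeholder, and the API key and endpoint come from the usual OPENAI_* environment variables:

    from langchain.llms import AzureOpenAI       # Bedrock wrappers exist in langchain too
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # A prompt template: one of the LangChain building blocks mentioned above
    prompt = PromptTemplate(
        input_variables=["question"],
        template="Answer this SAP BTP question concisely:\n{question}",
    )

    # "my-gpt35-deployment" is a hypothetical Azure OpenAI deployment name;
    # OPENAI_API_KEY / OPENAI_API_BASE must be set in the environment
    llm = AzureOpenAI(deployment_name="my-gpt35-deployment")

    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run(question="What does SAP AI Core do?"))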

Step 3. Register Amazon Bedrock or Azure OpenAI LLMs as Artifacts

These are the instructions for registering essential artifacts and setting up a connection to SAP AI Core for deployment. Here is a summarized overview of the key steps:

  1. Register Artifacts: Create and configure JSON files for the following artifacts:
    • aic_service_key.json: Service key for our AI Core instance.
    • git_setup.json: Details of the GitHub repository that will contain workflow files.
    • docker_secret.json: Optional Docker secret for private images (use Docker Hub PAT).
    • env.json: Environment variables for Docker, Amazon Bedrock, or Azure OpenAI services.
  2. Connect to AI Core Instance: Use the AI API Python SDK to connect to our AI Core instance using the credentials from the service key file (a connection sketch follows this list).
  3. Onboard the Git Repository: Register a GitHub repository with workflow files for training and serving. Provide repository details, including username and password.
  4. Register an Application: Create an application registration for the onboarded repository, specifying the repository URL, path, and revision.
  5. Create a Resource Group: Establish a resource group as a scope for registered artifacts. Modify the resource group ID based on our account type (free or paid).
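
Here is a minimal connection sketch for step 2, assuming the service key was saved as aic_service_key.json; "default" is the resource group ID on free-tier accounts, so adjust it for a paid account:

    import json
    from ai_api_client_sdk.ai_api_v2_client import AIAPIV2Client

    # Load the service key of our AI Core instance (step 1 above)
    with open("aic_service_key.json") as f:
        svc = json.load(f)

    ai_api_client = AIAPIV2Client(
        base_url=svc["serviceurls"]["AI_API_URL"] + "/v2",
        auth_url=svc["url"] + "/oauth/token",
        client_id=svc["clientid"],
        client_secret=svc["clientsecret"],
        resource_group="default",  # modify for a paid account (step 5)
    )

    # Sanity check: list the scenarios visible in this resource group
    for scenario in ai_api_client.scenario.query().resources:
        print(scenario.id, scenario.name)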

 

Step 4. Deploy the Inference Service on AI Core as a Proxy

Set up and deploy an inference service in AI Core:
  1. Serving Configuration: Create a serving configuration that includes metadata about the AI scenario, the workflow for serving, and details about which artifacts and parameters to use. This configuration helps define how the inference service will work.
  2. Execute Code: Use the provided code to create the serving configuration. This code involves loading environment variables and specifying parameters for the Docker repository and Azure OpenAI service. The resulting configuration will be associated with a resource group.
  3. Verify Configuration: Once the serving configuration is successfully created, it should appear in AI Launchpad under the ML Operations > Configurations tab.
  4. Serve the Proxy: Use AI Core to deploy the inference service based on the serving configuration. The code provided will initiate the deployment process.
  5. Check Deployment Status: Poll the deployment status until it reaches “RUNNING.” The code will continuously check the status and provide updates.
  6. Deployment Complete: Once the deployment is marked as “RUNNING,” the inference service is ready for use. The Deployment ID and other details appear in AI Launchpad under the Deployments tab. A condensed sketch of this flow follows.
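
A condensed sketch of steps 1 to 5, reusing the ai_api_client from Step 3; the configuration name, scenario ID, executable ID, and parameter binding are hypothetical placeholders that must match your serving template:

    import time
    from ai_api_client_sdk.models.parameter_binding import ParameterBinding
    from ai_api_client_sdk.models.status import Status

    # 1-2. Create the serving configuration (names must match the workflow template)
    config = ai_api_client.configuration.create(
        name="azure-openai-proxy",                  # hypothetical
        scenario_id="my-llm-scenario",              # hypothetical
        executable_id="my-llm-serving-executable",  # hypothetical
        parameter_bindings=[ParameterBinding(key="modelName", value="gpt-35-turbo")],
    )

    # 4. Serve the proxy: deploy the configuration
    deployment = ai_api_client.deployment.create(configuration_id=config.id)

    # 5. Poll the deployment status until it reaches RUNNING
    while True:
        d = ai_api_client.deployment.get(deployment.id)
        print("Deployment status:", d.status)
        if d.status == Status.RUNNING:
            break
        time.sleep(30)

    print("Inference URL:", d.deployment_url)

Once RUNNING, that deployment URL is the proxy endpoint our LangChain or RAG code calls, authenticated with the same OAuth credentials the SDK uses.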

 

Step 5. Use RAG Embeddings with the Chroma Vector DB on BTP, or Consume SAP Graph

Here are the key actions and steps for RAG question answering with embeddings, or for in-context learning from SAP Graph:

Prerequisites:

  • Create an object store secret in AI Core
  • The Chroma vector database runs in a Docker container on BTP
  • Get familiar with the SAP Graph APIs or SAP Integration Suite APIs

Here is how to set up and deploy Chroma DB, a vector database, as a Docker container. The key steps and points covered:

  1. Chroma DB in Production: When using Chroma DB in production, running it in client-server mode is preferable. This involves having a remote server running Chroma DB and connecting to it using HTTP requests.
  2. Setting Up a Chroma Client: Python code sets up a Chroma client that connects to the Chroma DB server over HTTP and configures it to interact with the server (a minimal client sketch follows this list).
  3. Creating a Dockerfile for the Client: A Dockerfile is designed for the Chroma client to run it as a container service. This file specifies the base image, working directory, package installations, and default command.
  4. Docker Compose: A Docker Compose file defines two services: the Chroma client and the Chroma DB server. It also creates a network bridge to allow communication between these services.
  5. Running Chroma DB and the Client: The docker-compose up --build command creates containers and runs Chroma DB and the Chroma client. This command should be executed with Docker Desktop running.
  6. Consume SAP Graph from SAP Integration Suite following this video.
  7. Repository Link: A link to the complete repository for the Docker approach is provided.
  8. Use our deployment for question-answer queries, as demonstrated in the example provided at this URL: https://pypi.org/project/sap-ai-core-llm/
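
For step 2, here is a minimal client sketch, assuming a 2023-era chromadb release (newer versions expose chromadb.HttpClient instead) and a Docker Compose service named chroma-server, which is a hypothetical name:

    import chromadb
    from chromadb.config import Settings

    # Connect to the remote Chroma DB server over HTTP (client-server mode)
    client = chromadb.Client(Settings(
        chroma_api_impl="rest",
        chroma_server_host="chroma-server",  # hypothetical Docker Compose service name
        chroma_server_http_port="8000",
    ))

    # Store one embedded document and run a question-answer style query
    collection = client.get_or_create_collection("sap-docs")
    collection.add(
        documents=["SAP AI Core handles the execution and operations of AI assets in BTP."],
        ids=["doc-1"],
    )
    results = collection.query(query_texts=["What does AI Core do?"], n_results=1)
    print(results["documents"])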


More Materials 📚

Beginner Materials for AI Core and AI Launchpad

Blog post series on Building AI and Sustainability Solutions on SAP BTP

Demo videos recorded by SAP HANA Academy

Useful links for SAP AI Core and SAP AI Launchpad

Amazon Bedrock, Azure OpenAI, and ChromaDB GitHub repositories
