Managing AI Models with SAP AI Core on BTP: Setup to Deployment Using Docker, GitHub & Object Store

Introduction

Managing AI assets in a scalable, standardized way can feel like taming a wild beast—unless you’re using SAP AI Core. This service on the SAP Business Technology Platform (BTP) provides a powerful engine for running AI workflows and model serving workloads, complete with integration capabilities for tools like Docker, GitHub, and Object Store. But enough with the buzzwords—let’s dive in and make AI magic happen, one model at a time!

SAP Documentation: What Is SAP AI Core? | SAP Help Portal

 

Architecture Overview

[Image: architecture overview of SAP AI Core on SAP BTP with its connected components]

 

To get things rolling, let’s break down the architecture (don’t worry—no complicated blueprints involved).

  • SAP AI Core is your main engine for AI workflows and model serving.
  • SAP AI Launchpad is like a friendly control tower, managing multiple AI runtimes as a SaaS application.
  • AI API? It standardizes managing the AI lifecycle across different runtimes. Simple, right?
  • Git Repository stores your training and serving workflows as templates.
  • Docker Repository? It handles custom Docker images (think bundled code with all dependencies included).
  • Data Storage is where input/output artifacts like training data and models chill out.

 

Initial Setup Steps

Setting up can sound intimidating, but it’s just a series of simple steps (with a few cups of coffee along the way):

  1. Create a Subaccount in an AWS region of SAP BTP (SAP AI Core is currently available in AWS-based data centers).
  2. Enable Cloud Foundry and create a space.
  3. Add Service Plans for SAP AI Core and SAP AI Launchpad.
  4. Create Service Instances for both and generate service keys for access.

[Images: SAP BTP cockpit screenshots of the service plan and service instance setup]

Voila! You’re ready to connect and roll.

 

Integrating Docker, GitHub, and Object Store

Docker Repository

Here’s where we package our AI code into tidy Docker images. SAP AI Core can fetch these images and ensure all dependencies (yes, even the finicky ones) are ready to go. Think of Docker as the suitcase that fits everything you need for your AI journey.

A Docker repository is essential for storing AI code in the form of Docker images on the cloud. SAP AI Core accesses this repository to retrieve the code. The Docker image ensures that the code is packaged with all necessary dependencies, directory structures, and drivers needed for GPU utilization.

Key Components of the Docker Image:

  1. Machine Learning Code – The main Python script.
  2. Dependencies – Packages required by the ML code, specified in the requirements.txt file.
  3. Dockerfile – Instructions for building the Docker image.

Example Dockerfile:

[Image: example Dockerfile]
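
The screenshot of the example Dockerfile is not reproduced here, so here is a rough sketch of what such a Dockerfile typically looks like. The base image, the file names main.py and requirements.txt, and the /app paths are assumptions rather than details taken from the original image:

  # Minimal sketch; adjust the base image and paths to your scenario
  FROM python:3.9-slim
  WORKDIR /app

  # Install dependencies first so they are cached as a separate layer
  COPY requirements.txt /app/requirements.txt
  RUN pip install --no-cache-dir -r /app/requirements.txt

  # Copy the machine learning code into the image
  COPY main.py /app/main.py

  # SAP AI Core typically runs containers as a non-root user, so relax permissions on /app
  RUN chgrp -R nogroup /app && chmod -R 770 /app

  CMD ["python", "/app/main.py"]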

Pro tip: Organizations with more than 250 employees require a paid Docker Desktop subscription. For this reason, many IT teams opt for alternatives like Podman when building their images. Credentials for private repositories are covered next.

Storing Docker Credentials:

To allow SAP AI Core to pull Docker images from a private repository, store your Docker credentials as a registry secret:

Add a Docker Registry Secret in AI Launchpad:

[Image: Docker registry secret creation form in AI Launchpad]

Note: SAP AI Core does not verify Docker credentials when you save the secret, so a typo only surfaces later, when an image pull fails. Double-check your .dockerconfigjson format; it will save a few headaches.

 

GitHub Repository

This is where your AI workflows live and thrive. Configuration files in YAML format hold the instructions for SAP AI Core. Connect using a GitHub access token, and you’re set.
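
If you prefer to script the connection rather than use AI Launchpad, a rough sketch with the ai-core-sdk Python package might look like the following. The repository name, URLs, and credentials are placeholders, and the exact method names may differ between SDK versions, so verify this against the SDK documentation:

  from ai_core_sdk.ai_core_v2_client import AICoreV2Client

  # Values come from the SAP AI Core service key (placeholders here)
  client = AICoreV2Client(
      base_url="<AI_API_URL>/v2",
      auth_url="<AUTH_URL>/oauth/token",
      client_id="<CLIENT_ID>",
      client_secret="<CLIENT_SECRET>",
  )

  # Register the GitHub repository that holds the workflow templates;
  # the personal access token is passed as the password.
  client.repositories.create(
      name="my-workflows",                      # hypothetical repository name
      url="https://github.com/<USER>/<REPO>",
      username="<GITHUB_USER>",
      password="<GITHUB_ACCESS_TOKEN>",
  )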

Object Store

Managing input/output artifacts like training data and models? Enter the Object Store—think of it as your cloud storage (AWS S3, Azure Blob, Google Cloud Storage, etc.). Adding data artifacts requires tools like Postman or the AI Core SDK; AI Launchpad won’t cut it for this step. What Is Object Store? | SAP Help Portal
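
As a hedged illustration of registering a data artifact with the AI Core SDK (the object store secret name, path, and scenario ID below are placeholders, and the client setup follows the same sketch as above):

  from ai_api_client_sdk.models.artifact import Artifact
  from ai_core_sdk.ai_core_v2_client import AICoreV2Client

  # Values come from the SAP AI Core service key (placeholders here)
  client = AICoreV2Client(
      base_url="<AI_API_URL>/v2",
      auth_url="<AUTH_URL>/oauth/token",
      client_id="<CLIENT_ID>",
      client_secret="<CLIENT_SECRET>",
  )

  # Register a dataset that already sits in the bucket behind the "default"
  # object store secret; the path after ai:// is the prefix inside the bucket.
  artifact = client.artifact.create(
      name="training-data",
      kind=Artifact.Kind.DATASET,
      url="ai://default/training-data",
      scenario_id="<SCENARIO_ID>",
      description="Raw training data for the demo scenario",
      resource_group="default",
  )
  print(artifact.id)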

 

Building AI Workflows

Training Phase
  1. Prepare Your Python Script (with dependencies).
  2. Create a Docker Image and push it to Docker Hub.
  3. Sync YAML Files to create configurations and deployments in AI Launchpad.
  4. Execute the Workflow and watch as your AI model trains and shines.

Serving Phase

Craft a serving script using tools like Flask. Build, push, configure, and deploy—then get ready to make predictions. Just remember, each deployment URL is unique!

Addressing the Unique Deployment URL Challenge

When you deploy a machine learning model in SAP AI Core, it’s exposed via a deployment URL for consumption. By default, each new deployment results in a unique URL. While this allows for flexibility, it can complicate integration with applications that expect a consistent endpoint.

Strategies to Maintain a Static Deployment URL:

  1. Patching Existing Deployments: SAP AI Core offers a feature to update (or “patch”) an existing deployment. This means you can modify configurations or update the model without changing the deployment ID, thereby maintaining the same URL. Utilize the AI Launchpad or AI Core SDK to apply patches, ensuring a consistent endpoint for connected applications. A minimal sketch follows this list.

  2. Implementing a Reverse Proxy: Setting up a reverse proxy to route requests from a static URL to the dynamic deployment URL allows applications to interact with a consistent endpoint. This approach requires additional setup and maintenance but ensures consistent access for external systems.
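
As a rough sketch of the patching approach, the AI API exposes deployments under /v2/lm/deployments and accepts a PATCH request with a new configurationId. The snippet below uses plain requests; the URL, token handling, IDs, and resource group are placeholders, and the exact payload should be confirmed against the AI API reference for your version:

  import requests

  AI_API_URL = "<AI_API_URL>"                  # from the SAP AI Core service key
  TOKEN = "<OAUTH_BEARER_TOKEN>"               # client-credentials token from <AUTH_URL>/oauth/token
  DEPLOYMENT_ID = "<DEPLOYMENT_ID>"
  NEW_CONFIGURATION_ID = "<CONFIGURATION_ID>"  # configuration that points to the new model version

  # Patch the existing deployment so it picks up the new configuration
  # while keeping the same deployment ID, and therefore the same URL.
  response = requests.patch(
      f"{AI_API_URL}/v2/lm/deployments/{DEPLOYMENT_ID}",
      headers={
          "Authorization": f"Bearer {TOKEN}",
          "AI-Resource-Group": "default",
          "Content-Type": "application/json",
      },
      json={"configurationId": NEW_CONFIGURATION_ID},
  )
  response.raise_for_status()
  print(response.json())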

Visual Studio Code Toolkit

For those who like one-stop development, the VS Code Toolkit connects directly to AI Core. Build, configure, and deploy with ease, all within the familiar VS Code interface.

Practical Examples and ML Workflow Management with SAP AI Core

Getting Started with Scripts and Workflows

For practical examples of Python scripts, YAML files, Dockerfiles, and more, you can explore this resource on SAP Developers. Here’s a brief overview of key steps to get your AI scripts served via SAP AI Core:

Steps to Deploy a Script using SAP AI Core:

  1. Create a Docker Image: Include your main.py, requirements.txt, and Dockerfile.
  2. Push the Image to Docker Hub.
  3. Connect GitHub to SAP AI Core and create an application from a YAML file.
  4. Create a Configuration in AI Core.
  5. Deploy your application in AI Core.

Training Phase of the ML Workflow

Training your machine learning models involves the following:

  1. Obtain Your Python Script from the data scientist, or create it yourself.
  2. Understand Your Code: Identify required libraries and data imports.
  3. Create a YAML File (type: WorkflowTemplate) in your GitHub repository. Specify details such as the Docker image name, Docker Hub info, placeholders for cloud storage, input types, etc.
  4. Create an Application in AI Launchpad (or use VS Code) to access the new YAML file.
  5. Synchronize to view your new scenario, including workflow executables and parameters/input artifacts as defined in your YAML.
  6. Upload Training Data to your Cloud Storage if needed.
  7. Modify Your Python Script (a minimal sketch follows this list):
    • Define variables for the training data path in the Docker image (as specified in the YAML file).
    • Specify the model save path.
    • Adjust other YAML-referenced variables.
  8. Save Your Trained Model.
  9. Create a requirements.txt File for listing required libraries.
  10. Create a Dockerfile to package the environment, including placeholders for data storage.
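
To make steps 7 and 8 concrete, here is a minimal, hedged sketch of such a training script. The /app/data and /app/model paths, the environment variable names, the CSV layout, and the scikit-learn model are all assumptions; use whatever paths and libraries your WorkflowTemplate and your data scientist's code actually define:

  # main.py - hypothetical training script
  import os

  import joblib
  import pandas as pd
  from sklearn.ensemble import RandomForestRegressor

  # Paths must match the artifact paths declared in the WorkflowTemplate YAML
  DATA_PATH = os.environ.get("DATA_PATH", "/app/data/train.csv")
  MODEL_DIR = os.environ.get("MODEL_DIR", "/app/model")

  def main():
      # Load the training data that SAP AI Core mounted from the object store
      df = pd.read_csv(DATA_PATH)
      X, y = df.drop(columns=["target"]), df["target"]

      # Train a simple stand-in model for the data scientist's real code
      model = RandomForestRegressor(n_estimators=100, random_state=42)
      model.fit(X, y)

      # Save the model to the output path so AI Core can upload it to the object store
      os.makedirs(MODEL_DIR, exist_ok=True)
      joblib.dump(model, os.path.join(MODEL_DIR, "model.pkl"))

  if __name__ == "__main__":
      main()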

Building and Pushing the Docker Image:

  • Start a Podman Virtual Machine if using Podman as an alternative to Docker Desktop.
  • Navigate to the Directory containing your files (main.py, requirements.txt, Dockerfile).
  • Build the Image: Use podman build --tls-verify=false -t docker.io/<YOUR_DOCKER_USERNAME>/<YOUR_IMAGE_NAME>:<VERSION> . (Note the space and the trailing dot, which sets the build context to the current directory.)
  • Push the Image to Docker Hub.

Once ready, create a configuration in AI Launchpad (or VS Code), select your scenario, and create an execution to train the dataset. Your trained model will then be accessible in the Object Store.
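
If you would rather script this step than click through AI Launchpad, a hedged sketch with the ai-core-sdk might look like this. The scenario, executable, and artifact IDs as well as the parameter keys are placeholders that depend on your YAML template:

  from ai_api_client_sdk.models.input_artifact_binding import InputArtifactBinding
  from ai_api_client_sdk.models.parameter_binding import ParameterBinding
  from ai_core_sdk.ai_core_v2_client import AICoreV2Client

  client = AICoreV2Client(
      base_url="<AI_API_URL>/v2",
      auth_url="<AUTH_URL>/oauth/token",
      client_id="<CLIENT_ID>",
      client_secret="<CLIENT_SECRET>",
  )

  # Bind the registered dataset artifact and any parameters to the executable
  config = client.configuration.create(
      name="training-config",
      scenario_id="<SCENARIO_ID>",
      executable_id="<EXECUTABLE_ID>",
      input_artifact_bindings=[InputArtifactBinding(key="<INPUT_KEY>", artifact_id="<ARTIFACT_ID>")],
      parameter_bindings=[ParameterBinding(key="<PARAM_KEY>", value="<PARAM_VALUE>")],
      resource_group="default",
  )

  # Start a training execution from that configuration
  execution = client.execution.create(configuration_id=config.id, resource_group="default")
  print(execution.id)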


Serving Phase of the ML Workflow

To serve a trained model, follow these steps:

  1. Create a Serving Python Script (a minimal sketch follows this list):
    • Use a framework like Flask in a file such as main.py.
    • Create the serving app with app = Flask(__name__).
    • Define a prediction route with @app.route("/v1/predict", methods=["POST"]).
    • Load the request data and the trained model, run the prediction, and return the response.
  2. Create a ServingTemplate YAML File in your GitHub repository, specifying details like your Docker image, Docker Hub repository, and port.
  3. Create an Application in AI Launchpad (or VS Code) and verify synchronization.
  4. Prepare Necessary Files:
    • requirements.txt for required libraries (e.g., Flask).
    • Dockerfile for packaging the environment.
  5. Build and Push the Docker Image following similar steps as in the training phase.
  6. Create a Configuration and deploy your scenario. An API Endpoint URL will be provided for making predictions.
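
To give step 1 a concrete shape, here is a minimal, hedged sketch of a serving script. The model path, port, and payload format are assumptions; the port in particular must match what your ServingTemplate declares:

  # main.py - hypothetical serving script
  import joblib
  from flask import Flask, jsonify, request

  app = Flask(__name__)

  # Load the model produced by the training execution; SAP AI Core mounts it
  # into the container at the path declared in the ServingTemplate.
  model = joblib.load("/app/model/model.pkl")

  @app.route("/v1/predict", methods=["POST"])
  def predict():
      payload = request.get_json()                      # e.g. {"features": [[1.0, 2.0, 3.0]]}
      prediction = model.predict(payload["features"])
      return jsonify({"prediction": prediction.tolist()})

  if __name__ == "__main__":
      app.run(host="0.0.0.0", port=9001)                # placeholder port; match the ServingTemplate

Once the deployment is running, the endpoint can be called with the deployment URL shown in AI Launchpad, for example (token handling is simplified and the resource group is assumed to be default):

  import requests

  DEPLOYMENT_URL = "<DEPLOYMENT_URL>"   # unique URL shown after deployment
  TOKEN = "<OAUTH_BEARER_TOKEN>"

  response = requests.post(
      f"{DEPLOYMENT_URL}/v1/predict",
      headers={"Authorization": f"Bearer {TOKEN}", "AI-Resource-Group": "default"},
      json={"features": [[1.0, 2.0, 3.0]]},
  )
  print(response.json())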

 

Conclusion

By combining SAP AI Core with Docker, GitHub, and Object Store, you’ve got a powerful toolkit for scalable AI model management. So, whether you’re deploying complex ML workflows or serving models faster than you can say “SAP AI Core,” you’re now equipped to handle it all—no sweat (well, maybe a little).

Ready to make some AI magic happen? Let’s go!

 
