Welcome to the last week of the AI Developer Challenge! Thank you so much for your collaboration! I love seeing the exchange in the discussions! This week we want to deploy our model and then use it to make predictions.
In the real world we would now retrain the model with some feature engineering, using the outcome of our feature importance analysis. But we only have 5 Wednesdays in May, and we want to finish the process of training and serving a model with Data Attribute Recommendation (DAR)! You are welcome to try to improve your model and share it here as well! Maybe start with feature selection and train a model using only the most important features this time (see the sketch below)?
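If you want to give the feature selection a go, one simple way to prepare a reduced training file is to keep only the columns you identified as most important, plus the target column, before uploading a new dataset. Here is a minimal pandas sketch; the file name and the list of top features are placeholders, so use the outcome of your own feature importance run:

```python
# Sketch: build a reduced training file that keeps only the most important
# features plus the target column.
import pandas as pd

TOP_FEATURES = ["TotalVolume", "type", "region", "year"]  # placeholder -- use your own feature importance results
TARGET = "AveragePrice"

df = pd.read_csv("avocado.csv")  # placeholder file name for the training data
df[TOP_FEATURES + [TARGET]].to_csv("avocado_top_features.csv", index=False)
```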
Here is what we have to do to deploy a model using DAR:
1. Use the POST Create Deployment Configuration Regression request. In your environment, assign the value "70db0a45-c2ab-4525-91c5-0ff763d3784f" to the variable deploymentExecutableIdRegression. You can also find this ID using the GET List Executables request (make sure to use version 3 of the deployment executable for model template 'bdbcd699-4419-40a5-abb8-e7ad43dde49b'!).
2. Double-check that you are deploying the correct model using the GET List Artifacts request and compare the artifact IDs (modelArtifactIdRegression). The scenario ID is still assigned from the previous steps.
3. After creating the configuration, use the POST Deploy Model Regression request to deploy your model. The status will initially be UNKNOWN. Check it with the GET Get Deployment Regression details request; it should change to RUNNING. With the deployment URL you get in the response, you can now query your model. (A Python sketch of the deployment and status calls follows this list.)
4. Go to the last folder of the collection, DAR API – Inference, and open the POST Inference Regression request. Assign the deployment URL from the previous step to the variable deploymentUrlRegression.
5. Paste the following JSON into the request body to predict avocado prices for the following two data entries:
{
  "topN": 1,
  "objects": [
    {
      "objectId": "optional-identifier-1",
      "features": [
        {"name": "Date", "value": "2015-12-30"},
        {"name": "TotalVolume", "value": "74237.72"},
        {"name": "PLU4046", "value": "1037.64"},
        {"name": "PLU4225", "value": "36636.85"},
        {"name": "PLU4770", "value": "52.16"},
        {"name": "TotalBags", "value": "7589.96"},
        {"name": "SmallBags", "value": "9603.53"},
        {"name": "LargeBags", "value": "84.39"},
        {"name": "XLargeBags", "value": "0.0"},
        {"name": "type", "value": "conventional"},
        {"name": "year", "value": "2015"},
        {"name": "region", "value": "Albany"}
      ]
    },
    {
      "objectId": "optional-identifier-2",
      "features": [
        {"name": "Date", "value": "2015-03-16"},
        {"name": "TotalVolume", "value": "140487.57"},
        {"name": "PLU4046", "value": "4942.49"},
        {"name": "PLU4225", "value": "88080.11"},
        {"name": "PLU4770", "value": "5551.28"},
        {"name": "TotalBags", "value": "8697.87"},
        {"name": "SmallBags", "value": "8703.62"},
        {"name": "LargeBags", "value": "92.35"},
        {"name": "XLargeBags", "value": "0.0"},
        {"name": "type", "value": "conventional"},
        {"name": "year", "value": "2015"},
        {"name": "region", "value": "Indianapolis"}
      ]
    }
  ]
}
6. To SUBMIT your result, post a screenshot of the prediction.
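For reference, here is a minimal Python sketch of the deployment calls from steps 1 and 3, using the requests library instead of Postman. The endpoint paths follow the AI API v2 conventions the collection is built on; the host, token, payload names, and the input artifact binding key are assumptions, so adapt them to your environment and to the exact request bodies in the collection.

```python
# Minimal sketch of the deployment flow against the AI API (the same calls the
# Postman requests make). Placeholders in angle brackets must be filled in;
# the payload field values and the binding key "model" are assumptions.
import time
import requests

AI_API_URL = "https://<your-ai-api-host>/v2"   # from your service key
RESOURCE_GROUP = "default"                     # your resource group
TOKEN = "<bearer-token>"                       # OAuth token from your service key

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "AI-Resource-Group": RESOURCE_GROUP,
    "Content-Type": "application/json",
}

# Step 1: create a deployment configuration that references the deployment
# executable and the trained model artifact from the previous weeks.
config_payload = {
    "name": "avocado-regression-deployment",
    "scenarioId": "<scenarioIdRegression>",
    "executableId": "70db0a45-c2ab-4525-91c5-0ff763d3784f",
    "inputArtifactBindings": [
        {"key": "model", "artifactId": "<modelArtifactIdRegression>"}
    ],
}
config = requests.post(
    f"{AI_API_URL}/lm/configurations", headers=HEADERS, json=config_payload
).json()

# Step 3: deploy the model and poll until the deployment is RUNNING.
deployment = requests.post(
    f"{AI_API_URL}/lm/deployments",
    headers=HEADERS,
    json={"configurationId": config["id"]},
).json()

status = "UNKNOWN"
while status not in ("RUNNING", "DEAD"):
    time.sleep(30)
    details = requests.get(
        f"{AI_API_URL}/lm/deployments/{deployment['id']}", headers=HEADERS
    ).json()
    status = details.get("status", "UNKNOWN")
    print("Deployment status:", status)

deployment_url = details["deploymentUrl"]  # use this URL for the inference request
```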
If you have made it here, GREAT WORK! You have completed 5 weeks of learning about SAP’s AI Services! Thank you so much for participating and helping each other out! It is so much fun to be part of this community! We would love to get your feedback on the AI Services! Let us know your thoughts in the comments!
What you have learned:
- Document Information Extraction Premium is a genAI-based tool that helps you extract information from documents.
- You learned how to use the UI as well as the Python SDK!
- Data Attribute Recommendation can be applied to different machine learning problems!
- You have learned how to upload a dataset and train and deploy a regression model!
- We used Postman to implement the end-to-end machine learning workflow!
Additional Information
We have used the AI API Postman collection to train and deploy our model. You can do the same using the AI API Python SDK.
You can also connect your DAR instance to SAP AI Launchpad to do the training and serving.
For inference, you can either make a standard REST call or use the DAR SDK (see the sketch below).
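If you go the REST route outside of Postman, here is a small requests sketch of the inference call, assuming the deployment URL from the steps above. The inference path and the header set are placeholders; copy the exact URL and headers from the POST Inference Regression request in the collection.

```python
# Sketch of the inference request as a plain REST call.
import requests

DEPLOYMENT_URL = "<deploymentUrlRegression>"   # from the deployment details response
INFERENCE_PATH = "/v1/inference"               # placeholder -- verify against the Postman request
TOKEN = "<bearer-token>"

payload = {
    "topN": 1,
    "objects": [
        {
            "objectId": "optional-identifier-1",
            "features": [
                {"name": "Date", "value": "2015-12-30"},
                {"name": "TotalVolume", "value": "74237.72"},
                {"name": "PLU4046", "value": "1037.64"},
                {"name": "PLU4225", "value": "36636.85"},
                {"name": "PLU4770", "value": "52.16"},
                {"name": "TotalBags", "value": "7589.96"},
                {"name": "SmallBags", "value": "9603.53"},
                {"name": "LargeBags", "value": "84.39"},
                {"name": "XLargeBags", "value": "0.0"},
                {"name": "type", "value": "conventional"},
                {"name": "year", "value": "2015"},
                {"name": "region", "value": "Albany"},
            ],
        },
        # add further objects (e.g. optional-identifier-2) in the same shape
    ],
}

response = requests.post(
    f"{DEPLOYMENT_URL}{INFERENCE_PATH}",
    headers={"Authorization": f"Bearer {TOKEN}", "AI-Resource-Group": "default"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # should contain the predicted average price per objectId
```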