Run Jupyter Notebooks on Red Hat OpenShift Data Science
Prerequisites
- The Configure RHODS workspace section has been completed.
Run Jupyter notebooks
- prepare_env.ipynb
- Train model.ipynb
- Test_Model.ipynb
- Store_Model.ipynb
Change the MinIO values and run each notebook in order.
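The notebooks read the MinIO connection details from variables near the top of each file. A minimal sketch of the values to change, assuming environment-variable style configuration (the actual variable names used by the notebooks may differ; check the first cells of prepare_env.ipynb):

```python
import os

# Hypothetical variable names -- verify against the first cells of the
# notebooks before running them. Values match the MinIO deployment used
# elsewhere in this guide.
os.environ["MINIO_ENDPOINT"] = "https://minio-api-mltesting.apps.mltesting.sandbox2831.opentlc.com"
os.environ["MINIO_BUCKET"] = "mybucket"
os.environ["MINIO_ACCESS_KEY"] = "minio"
os.environ["MINIO_SECRET_KEY"] = "minio123"
```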
Create pipelines with Tekton integration
Create a file called demo.pipeline
Click on Pipeline Editor and rename the pipeline to demo pipeline
Configure Pipeline Runtime
- Data Science Pipelines API Endpoint*
  - https://ds-pipeline-pipelines-definition-mltesting.apps.mltesting.example.com
- Public Data Science Pipelines API Endpoint
  - https://rhods-dashboard-redhat-ods-applications.apps.mltesting.sandbox2831.opentlc.com/pipelineRuns/mltesting
- Data Science Pipelines API Endpoint Password Or Token
  - OpenShift API token
- Cloud Object Storage Endpoint*
  - https://minio-api-mltesting.apps.mltesting.sandbox2831.opentlc.com
- Cloud Object Storage Bucket Name*
  - mybucket
- Cloud Object Storage Credentials Secret
  - `aws-connection-mybucket`
- Cloud Object Storage Username
  - minio
- Cloud Object Storage Password
  - minio123
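The Cloud Object Storage Credentials Secret field names a Kubernetes secret in the project namespace. A sketch of what that secret looks like, built as a Python manifest dict; the AWS_* key names follow the data-connection convention, but verify them against your cluster before applying:

```python
import base64

def b64(value: str) -> str:
    """Base64-encode a string for a Kubernetes Secret data field."""
    return base64.b64encode(value.encode()).decode()

# Assumed shape of the secret the pipeline runtime references; the name
# matches the Credentials Secret value entered above.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "aws-connection-mybucket", "namespace": "mltesting"},
    "type": "Opaque",
    "data": {
        "AWS_ACCESS_KEY_ID": b64("minio"),
        "AWS_SECRET_ACCESS_KEY": b64("minio123"),
        "AWS_S3_ENDPOINT": b64("https://minio-api-mltesting.apps.mltesting.sandbox2831.opentlc.com"),
        "AWS_S3_BUCKET": b64("mybucket"),
    },
}
```

Serializing this dict to YAML and applying it with `oc apply -f` would create the secret, if it does not already exist in the project.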
Configure the data volume for each job as follows:
- Mount Path*
  - /opt/app-root/src/
- Persistent Volume Claim Name*
  - pins-workspace
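Each pipeline step runs as a pod that mounts the claim at the given path, so the notebooks in every step share the same workspace files. A sketch of the volume wiring this produces in a pod spec (an assumed shape for illustration; the editor generates the real spec for you):

```python
# Assumed pod-spec fragment corresponding to the Mount Path and
# Persistent Volume Claim Name values entered above.
pod_spec_fragment = {
    "volumes": [
        {
            "name": "workspace",
            "persistentVolumeClaim": {"claimName": "pins-workspace"},
        }
    ],
    "containers": [
        {
            "name": "step",
            "volumeMounts": [
                {"name": "workspace", "mountPath": "/opt/app-root/src/"}
            ],
        }
    ],
}
```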
Optional: Add GPU information
- GPU:
  - 1
- GPU Vendor:
  - nvidia.com/gpu
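When GPU information is set, each step's container requests the GPU as a Kubernetes extended resource. A sketch of the resulting resource limits (an assumed shape for illustration):

```python
# Assumed container resources section produced by the GPU settings above;
# nvidia.com/gpu is the standard extended-resource name for NVIDIA GPUs.
gpu_count = 1
gpu_vendor = "nvidia.com/gpu"

resources = {"limits": {gpu_vendor: str(gpu_count)}}
```

The cluster must have GPU nodes and the NVIDIA device plugin available, or steps requesting this resource will stay unschedulable.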
Final View of the pipeline
Start the pipeline run
You should see this screen on a successful run.