AI Workbench toolsets enable IDEA users to perform all stages of AI/ML activities. The IDEA platform intends to support the following AI/ML activities.
AI & ML
Artificial intelligence (AI) and machine learning (ML) represent a significant breakthrough in computer science and data processing that is rapidly changing a diverse range of industries. Businesses and other organisations undergoing a digital transition are confronted with a mounting data tsunami that is both highly valuable and increasingly difficult to gather, handle, and analyse. New tools and approaches are required to handle the enormous amount of data being generated, mine it for insights, and act on those insights once they are discovered.
AI & ML Features
A brief summary
Annotator Ground Truth
Annotator Ground Truth is an IDEA AI Workbench tool for labelling structured/tabular datasets for ML model training, as well as a variety of unstructured datasets, such as images, for cognitive AI model training.
The indicative UI for the AI Annotator Ground Truth job specification requires the user to specify the input and output data locations along with the media and annotation details.
This enables the IDEA user to use cloud-native tools as much as possible for supported media and annotation types. The IDEA platform fills the gap for unsupported media and annotation types by provisioning top picks from open-source annotator tools.
The IDEA AI Workbench supports annotating the major media types, namely image, video, text, and audio, together with their various annotation types.
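The shape of such a job specification can be sketched as follows; the field names, the example paths, and the supported media/annotation pairs below are illustrative assumptions, not IDEA's actual schema:

```python
# Hypothetical sketch of an annotation job specification plus a validation
# helper. Field names and supported type pairs are illustrative only.
SUPPORTED = {
    "image": {"bounding-box", "polygon", "classification"},
    "video": {"object-tracking", "classification"},
    "text": {"named-entity", "classification"},
    "audio": {"transcription", "classification"},
}

def validate_job(spec: dict) -> bool:
    """Check that the media type and annotation type form a supported pair."""
    media = spec["media_type"]
    return media in SUPPORTED and spec["annotation_type"] in SUPPORTED[media]

job = {
    "input_location": "s3://my-bucket/raw-images/",   # hypothetical location
    "output_location": "s3://my-bucket/labels/",
    "media_type": "image",
    "annotation_type": "bounding-box",
}
print(validate_job(job))  # a supported media/annotation pair -> True
```

A pair that is not supported natively (for example, transcription on images) would fail this check and be routed to an open-source annotator tool instead.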
AI Build

This AI Workbench tool enables the user to experiment with input data for visualisation, cleaning, and feature engineering, generating data that is ready for downstream AI/ML model training and yielding a candidate data-preparation script and data-preparation model.
The indicative UI for the AI Build job specification requires the user to select or create the desired resources in the AI Build workspace in order to perform experiments with data.
This service is primarily meant for pre-processing raw training data so that the subsequent AI-Train steps can directly consume the prepared data.
The AI-Build job mainly consumes input data as a dataset reusable across jobs of this kind. The dataset may already exist as a registered dataset in the cloud-native ML workspace service, or the AI-Build job may need to provision its creation and registration.
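As an illustration of the kind of pre-processing an AI-Build job would script, here is a minimal, stdlib-only sketch; the specific imputation and scaling choices are assumptions made for the example:

```python
# Hypothetical data-preparation step: impute missing numeric values with the
# column mean, then min-max scale to [0, 1] so downstream training can
# consume the column directly.
from statistics import mean

def prepare(column):
    """Impute None with the mean of observed values, then min-max scale."""
    observed = [v for v in column if v is not None]
    fill = mean(observed)
    imputed = [fill if v is None else v for v in column]
    lo, hi = min(imputed), max(imputed)
    return [(v - lo) / (hi - lo) for v in imputed]

raw = [10.0, None, 30.0, 20.0]   # raw training column with a gap
print(prepare(raw))              # -> [0.0, 0.5, 1.0, 0.5]
```

In practice such logic would live in the candidate data-preparation script and be applied to the registered dataset rather than to an in-memory list.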
AI Train

This AI Workbench tool enables the user to experiment with training AI/ML models and hyperparameters on a variety of domain problem types and standard ML frameworks to build a trained ML model.
The indicative UI for the AI Train job specification requires the user to select or create the desired resources in the AI Train workspace in order to perform model training.
The interaction between the IDEA platform and cloud-native ML involves these components:
- Cloud-native ML platform, used in interactive mode for experimentation through sample code to arrive at an optimal training script during the development phase of a given AI/ML problem.
- Trained ML model, used for model evaluation.
- Training script, the true outcome to be taken forward for operationalisation, i.e., for integration into AI Reproducible MLOps.
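The idea of a training script parameterised by hyperparameters can be sketched in plain Python. A real AI-Train job would use a standard ML framework, but the shape is the same; the toy one-weight linear model below is an assumption made for illustration:

```python
# Hypothetical training-script sketch: fit a one-weight linear model
# y = w * x by gradient descent on mean squared error. The hyperparameters
# (learning rate, epochs) are the knobs an AI-Train job would expose.
def train(xs, ys, lr=0.01, epochs=200):
    """Fit w minimising mean squared error via gradient descent."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]            # ground truth relationship: y = 2x
w = train(xs, ys, lr=0.01, epochs=500)
print(round(w, 3))                    # converges near 2.0
```

The same script, once tuned, is the artifact carried into the reproducible MLOps pipeline described below, with the framework and model family swapped in.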
Model Deployment

Models registered in the cloud model registry workspace can be deployed as live endpoints for real-time inference. The IDEA UI provides the capability to select cloud-registered models.
The uniqueness of endpoint names is ensured, and the status of a deployment can be monitored within the IDEA UI.
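These two guarantees can be illustrated with a small sketch; the in-memory registry, the naming scheme, and the status values below are hypothetical, not IDEA's implementation:

```python
# Hypothetical sketch: endpoint-name uniqueness via a registry check plus a
# random suffix, and deployment monitoring via a simple status-polling loop.
import uuid

_registered = set()

def unique_endpoint_name(base: str) -> str:
    """Suffix the base name until it does not collide with an existing one."""
    name = base
    while name in _registered:
        name = f"{base}-{uuid.uuid4().hex[:8]}"
    _registered.add(name)
    return name

def poll_status(get_status, max_polls=10):
    """Poll a deployment-status callback until it reports a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("Healthy", "Failed"):
            return status
    return "Timeout"

a = unique_endpoint_name("churn-model")
b = unique_endpoint_name("churn-model")    # collides, so it gets a suffix
print(a != b)                               # -> True
states = iter(["Creating", "Updating", "Healthy"])
result = poll_status(lambda: next(states))
print(result)                               # -> Healthy
```

A real implementation would check names against the cloud provider's endpoint listing and poll the provider's deployment API instead of a local callback.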
Reproducible MLOps Pipelines
For Azure and AWS, IDEA provides a facility to register Kubeflow ML pipeline jobs. From the IDEA UI, users can navigate to the Kubeflow dashboard to create, run, and monitor MLOps pipeline jobs. This covers:
- Kubeflow setup on a managed, horizontally scalable cloud cluster
- Secure MLOps through tenant resource isolation, profiles, RBAC, etc.
- Kubeflow Pipeline components for orchestrating the key ML steps of the pipeline on the cloud platform (container registry, train, model registry, model deployment)
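The orchestration idea behind these components, steps running in dependency order (build, then train, then register, then deploy), can be sketched as follows; the step names and the plain-Python scheduler are illustrative assumptions, not the Kubeflow Pipelines API:

```python
# Hypothetical sketch of pipeline orchestration: each step declares its
# dependencies, and steps execute in topological order, mirroring the
# build -> train -> register -> deploy components listed above.
def run_pipeline(steps, deps):
    """Return an execution order where every step follows its dependencies."""
    done, order = set(), []

    def visit(step):
        if step in done:
            return
        for d in deps.get(step, []):
            visit(d)           # run dependencies first
        done.add(step)
        order.append(step)

    for step in steps:
        visit(step)
    return order

steps = ["deploy", "register", "train", "build"]
deps = {"train": ["build"], "register": ["train"], "deploy": ["register"]}
print(run_pipeline(steps, deps))  # -> ['build', 'train', 'register', 'deploy']
```

In Kubeflow itself the dependency graph is declared through pipeline component inputs and outputs, and the orchestrator runs each step as a container on the cluster.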
Cloud Native (GCP)
This module supports cloud-native Kubeflow MLOps pipeline creation for GCP through the IDEA UI. The user can provide build, train, and deploy input parameters through the UI and create a GCP cloud-native Kubeflow pipeline, which they can visualise in the Vertex AI Pipelines dashboard.
- Development phase: the orchestrator enables rapid ML experimentation
- Production phase: the orchestrator automates execution of the ML pipeline based on a schedule or certain triggering conditions
- Versioning of pipelines, experiment tracking, and analytics
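Pipeline versioning and experiment tracking can be illustrated with a minimal in-memory sketch; the class and its methods are hypothetical, not the Vertex AI Pipelines API:

```python
# Hypothetical sketch: resubmitting a pipeline under the same name bumps its
# version, and runs are recorded against a version for later analytics.
class PipelineRegistry:
    def __init__(self):
        self.versions = {}   # pipeline name -> latest version number
        self.runs = []       # (name, version, params) run records

    def register(self, name):
        """Register a pipeline; repeated names receive a new version."""
        self.versions[name] = self.versions.get(name, 0) + 1
        return self.versions[name]

    def record_run(self, name, version, params):
        """Track one execution of a given pipeline version."""
        self.runs.append((name, version, params))

reg = PipelineRegistry()
v1 = reg.register("churn-pipeline")        # -> 1
v2 = reg.register("churn-pipeline")        # -> 2 (new version, same name)
reg.record_run("churn-pipeline", v2, {"epochs": 10})
print(v1, v2, len(reg.runs))               # -> 1 2 1
```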
AI Trust

Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. Model interpretability is critical for data scientists, auditors, and business decision makers alike to ensure compliance with company policies, industry standards, and government regulations.
AI Trust is all about the interpretability of the results produced by AI/ML models during training. Interpretability is essential for:
- Model debugging
- Detecting fairness issues
- Human-AI cooperation
- Regulatory compliance
This AI Workbench tool enables the IDEA user to experiment with training AI/ML models and hyperparameters on a variety of domain problem types and standard ML frameworks to build an explainable ML model.
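One widely used model-agnostic interpretability technique is permutation feature importance: permute one feature column at a time and measure how much the model's error grows. Below is a stdlib-only sketch; the toy model, data, and deterministic rotation used as the permutation are assumptions made for the example:

```python
# Hypothetical sketch of permutation feature importance: a feature whose
# permutation raises the model's error is one the model depends on.
def mse(model, rows, ys):
    """Mean squared error of the model over the given rows."""
    return sum((model(r) - y) ** 2 for r, y in zip(rows, ys)) / len(rows)

def permutation_importance(model, rows, ys, feature):
    """Error increase after permuting (rotating) one feature column."""
    base = mse(model, rows, ys)
    col = [r[feature] for r in rows]
    col = col[1:] + col[:1]    # deterministic permutation for the sketch
    permuted = [r[:feature] + (v,) + r[feature + 1:]
                for r, v in zip(rows, col)]
    return mse(model, permuted, ys) - base

model = lambda r: 3 * r[0]                  # toy model uses only feature 0
rows = [(1.0, 5.0), (2.0, 1.0), (3.0, 9.0), (4.0, 2.0)]
ys = [3.0, 6.0, 9.0, 12.0]
imp0 = permutation_importance(model, rows, ys, 0)
imp1 = permutation_importance(model, rows, ys, 1)
print(imp0 > imp1)   # feature 0 carries the signal -> True
```

Production interpretability tooling typically averages such scores over many random permutations and combines them with other techniques such as SHAP values.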