Machine Learning Architect: For all your AI and GenAI needs

  • Machine Learning Model Parameters and Memory Usage

    The **parameters** in a machine learning (ML) model directly affect its **memory usage**, because they determine how much data the model must store and process during training and inference: the more parameters a model has, the more memory it consumes. Here’s a breakdown of how this works: ### 1. **Memory for Storing Parameters**…
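
A quick back-of-envelope sketch of the storage cost described above, assuming parameters are stored as plain floats; the 7-billion-parameter count and byte widths are illustrative numbers, not figures from the post:

```python
# Rough memory estimate for storing model parameters.
# fp32 = 4 bytes per parameter, fp16 = 2 bytes (common storage precisions).

def param_memory_gb(num_params: int, bytes_per_param: int = 4) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return num_params * bytes_per_param / (1024 ** 3)

# A hypothetical 7-billion-parameter model:
fp32_gb = param_memory_gb(7_000_000_000, bytes_per_param=4)  # roughly 26 GiB
fp16_gb = param_memory_gb(7_000_000_000, bytes_per_param=2)  # half of that
```

Note that this covers only the stored weights; training adds gradients and optimizer state on top.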

  • Are parameters known prior to Training an ML Model?

    ### Are Parameters Known Prior to Training an ML Model? The process of determining the parameters of a model is known as **training**. During training, the model learns from the data by iteratively adjusting its parameters to optimize a given objective function (also known as a loss function). So – No, **parameters are not known…
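
The "iteratively adjusting its parameters to optimize a loss function" step can be sketched in a few lines of gradient descent; the toy data, learning rate, and iteration count below are illustrative, not from the post:

```python
# Minimal sketch of "training": gradient descent on a 1-D least-squares fit.
# The parameter w starts at an arbitrary guess and is *learned* from the data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with true w = 2

w = 0.0       # initial guess for the parameter
lr = 0.05     # learning rate
for _ in range(200):
    # gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
```

Only after training does w settle near the value (2.0) implied by the data, which is the sense in which parameters are not known in advance.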

  • Use Case – Medical Image Classification

    Use Case – Medical Image Classification Uses models based on Convolutional Neural Net and Transformer Models. On Google Cloud Platform (GCP), several services support Convolutional Neural Networks (CNNs) and Transformer models, enabling you to train, deploy, and scale these deep learning models. Here are the primary GCP services that are most relevant for CNNs and…

  • Clustering Use Cases – Unsupervised Machine Learning on GCP

    Clustering is one of the most common patterns in unsupervised machine learning. Some areas / use cases where we can apply clustering include: market segmentation, social network analysis, search result grouping, medical imaging, image segmentation, and anomaly detection. BigQuery ML is ideal for clustering use cases (and SQL-based machine learning use cases in general). Apart from BQML, GCP…
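
As a sketch of the algorithm a clustering service such as BigQuery ML's k-means model type runs under the hood, here is Lloyd's algorithm in plain Python; the points and starting centroids are made-up toy values:

```python
import math

def kmeans(points, centroids, iters=10):
    """Lloyd's k-means on 2-D points; returns the final centroids."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two obvious groups; the centroids converge to the group means.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = kmeans(pts, centroids=[(0, 0), (10, 10)])
```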

  • Use Case – Text Generation, NLP, Sentiment Analysis – Transformers

    Transformers BERT (Bidirectional Encoder Representations from Transformers): used for a variety of NLP tasks such as sentiment analysis and question answering. GPT (Generative Pre-trained Transformer): used for text generation and completion tasks. Vision Transformers (ViT): adapted for image classification tasks, where the image is divided into patches that the transformer processes as sequences. Architecture: Self-Attention Mechanism: The…
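
The self-attention mechanism at the heart of these models can be sketched as scaled dot-product attention, softmax(QK^T / sqrt(d)) V, written out for tiny hand-made vectors (all numbers are illustrative, not from any real model):

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # how much each position is attended to
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value positions:
result = attention(Q=[[1.0, 0.0]], K=[[1.0, 0.0], [0.0, 1.0]],
                   V=[[1.0, 2.0], [3.0, 4.0]])
```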

  • Use Case – Remove noise from images, generate new sample images

    Autoencoders are a type of artificial neural network used for unsupervised learning tasks. They are designed to learn efficient representations of data, typically for the purpose of dimensionality reduction or data compression. The basic idea is to encode the input data into a lower-dimensional representation and then decode it back to the original input data.…
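
As a toy illustration of the encode/decode idea (not a trained autoencoder), the sketch below compresses a signal by averaging adjacent values and reconstructs it by repetition; a real autoencoder learns both maps from data:

```python
def encode(x):
    # "Compress" 8 values to 4 by averaging adjacent pairs.
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def decode(z):
    # Reconstruct by repeating each code value twice.
    return [v for v in z for _ in range(2)]

signal = [1.0, 1.2, 4.0, 4.2, 1.0, 0.8, 4.0, 3.8]
code = encode(signal)    # lower-dimensional representation
recon = decode(code)     # reconstruction from the code
mse = sum((a - b) ** 2 for a, b in zip(signal, recon)) / len(signal)
```

A trained autoencoder minimizes exactly this kind of reconstruction error, which also lets it strip noise that the low-dimensional code cannot represent.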

  • Use Case – Online Forecasting Model for Various Interfaces

    Online Forecasting Model that needs to work across a Web UI, Google Assistant, as well as Dialogflow. Vertex AI Prediction Service is a fully managed service designed to handle requests at scale. It supports both online and batch predictions. On the other hand, if it is not an online model and your dataset is related…

  • Use Case – Labeling large datasets

    For large datasets, use Vertex AI’s Data Labeling service for your classification model

  • Use Case – Sudden Inference Degradation

    If the quality of inferences suddenly goes down, ensure that you have Model Monitoring turned on (in Vertex AI). It continuously tracks the performance of your model in production. Monitored metrics include: accuracy, precision, and recall.
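
The monitored metrics reduce to simple ratios over prediction counts; the true/false positive and negative counts below are made up for illustration:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all predictions that are right
    precision = tp / (tp + fp)                   # of predicted positives, how many are real
    recall = tp / (tp + fn)                      # of real positives, how many were found
    return accuracy, precision, recall

# e.g. 80 true positives, 10 false positives, 20 false negatives, 90 true negatives:
acc, prec, rec = metrics(tp=80, fp=10, fn=20, tn=90)
```

A sudden drop in any of these between training and production is the kind of degradation Model Monitoring is meant to surface.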

  • Use Case – Build a Recommendation Engine – Different Model Performances against Test and Training Data

    Use Vertex AI Experiments to explore the output of different models, while keeping track of the inputs and the various stages of the runs.
