Artificial Intelligence and Machine Learning Professor

Andrés Bello Catholic University

February - September 2024 Part-Time

I taught the fundamentals of artificial intelligence and machine learning, covering neural networks, convolutional neural networks, recurrent neural networks, generative adversarial networks, and reinforcement learning.

My Experience as an AI and Machine Learning Professor at Andrés Bello Catholic University

In early 2024, I received an opportunity to teach at Andrés Bello Catholic University. The timing mattered: many engineering students wanted to learn about artificial intelligence and machine learning, but no course showed them both the theory and the real-world use. Universities often teach one or the other, either just math and theory, or just code without understanding. I wanted to change that.

What I Wanted to Achieve

My goal was clear but difficult. I wanted to build a course that:

  1. Started simple: Students with no AI experience could understand the basics
  2. Grew in difficulty: Once students understood basic ideas, they learned advanced topics
  3. Was practical: Students would not just write formulas. They would build real things that work
  4. Led to real skills: When the course ended, students could get jobs using what they learned

I had eight months to teach everything from classical machine learning to advanced deep learning. That was my challenge.

How to Teach Complex AI Without Losing Students

Designing the course was not easy. AI is a huge field with many complex ideas.

When I started planning, I saw three big problems:

Problem 1: Too Much Information

Students needed to learn so many concepts: classification (sorting data into groups), regression (predicting numbers), clustering (finding similar items), neural networks (computer brains), convolutional neural networks or CNNs (for images), recurrent neural networks or RNNs (for sequences like text), and advanced ideas like transformers (the technology behind ChatGPT).

Problem 2: The Gap Between Theory and Practice

Many students could memorize that an algorithm like SVM (Support Vector Machines) finds a "maximum-margin hyperplane" (the boundary that separates two groups with the widest possible gap), but they did not know what this meant in real life or how to use it.
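
To show what this means in practice, I would reach for a toy example like the one below. It is a minimal scikit-learn sketch written for this page, not one of the course notebooks: two small clusters of 2-D points and a linear SVM that finds the boundary, and the margin, between them.

```python
# Minimal sketch (not a course notebook): a linear SVM separating two groups of
# 2-D points, so the "maximum-margin hyperplane" becomes a visible decision
# boundary instead of a formula.
import numpy as np
from sklearn import svm

# Two small clusters: class 0 around (1, 1), class 1 around (4, 4)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 0.3, size=(20, 2)),
               rng.normal(4, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The hyperplane is w·x + b = 0; the margin is the gap it keeps from each class
w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane normal:", w, "bias:", b)
print("prediction for (2, 2):", clf.predict([[2.0, 2.0]])[0])
```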

Problem 3: Students Got Bored and Forgot

Traditional lectures did not work. Students would listen, take notes, leave class, and forget everything by next week.

My Teaching Approach

To solve these problems, I designed the course in four stages:

Stage 1: Teach Classical Foundations

I started with machine learning basics that students could see and understand (a short sketch follows this list):

  • Classification: Using algorithms like SVM with kernel functions (special math tricks that let us work with complex data)
  • Regression: Predicting future numbers based on past patterns
  • Clustering: Finding groups in data that we had not seen before
  • Dimension reduction: Simplifying large data without losing important information
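
Here is the kind of Stage 1 exercise I mean. This is an illustrative sketch on the classic Iris dataset, not an actual class notebook: it simplifies the data to two dimensions and then looks for groups in it without using the labels.

```python
# Illustrative Stage 1 sketch (not a course notebook): dimension reduction with
# PCA followed by clustering with k-means on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)

# Dimension reduction: keep 2 components out of the original 4 features
X_2d = PCA(n_components=2).fit_transform(X)

# Clustering: look for 3 groups without ever seeing the labels
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)

print("reduced shape:", X_2d.shape)               # (150, 2)
print("first 10 cluster assignments:", clusters[:10])
```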

Stage 2: Build Understanding with Neural Networks

Next, I taught how neural networks learn, loosely inspired by how the brain works (a small sketch of the training loop follows this list):

  • How neurons work and how they connect
  • Activation functions (gates that turn signals on and off)
  • Backpropagation (the way neural networks learn from mistakes)
  • Optimization techniques (how to make learning faster)
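
The sketch below compresses the whole Stage 2 loop into a few lines: a forward pass through activation functions, backpropagation of the error, and a gradient-descent update. It was written for this page as an illustration, not taken from a course notebook; the network learns the classic XOR problem.

```python
# Illustrative sketch: one hidden layer, sigmoid activations, backpropagation,
# and plain gradient descent, learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights for a 2 -> 8 -> 1 network
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5  # learning rate: the "how fast do we learn" knob

for step in range(10_000):
    # Forward pass: signals flow through the activation "gates"
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: push the error backwards to get gradients
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Optimization: a plain gradient-descent update
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
```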

Stage 3: Real-World Applications with Deep Learning

Then students saw how these ideas turn into real applications (a short example follows this list):

  • CNNs: Convolution and pooling operations for image recognition and object detection
  • RNNs: LSTMs and GRUs (special neural networks) for analyzing time sequences and predicting what comes next
  • Image Processing: Using Segment Anything Model (SAM) for image segmentation
  • Audio Processing: Using Librosa library to process sound waves
  • NLP: Natural Language Processing with transformers to understand human language
  • Generative Models: Creating new images, text, and sound
  • Reinforcement Learning: Teaching machines to play games and make decisions
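
One demo that always landed well in this stage was showing how little code a pre-trained transformer needs today. The sketch below is illustrative; it assumes the Hugging Face transformers package and lets the library pick its default sentiment-analysis model.

```python
# Illustrative sketch: a pre-trained transformer doing sentiment analysis in
# three lines, using whatever default model the library selects.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This course finally made neural networks click for me."))
# -> something like [{'label': 'POSITIVE', 'score': 0.99}]
```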

Stage 4: Make Learning Stick with Interactive Techniques

Most importantly, I changed how I taught:

Interactive Fill-in-the-Blank Activities: During lectures, I wrote Jupyter notebooks but left blanks in them and asked students to complete them. This forced them to think. They could not just sit and listen; they had to participate. When they filled in a blank correctly, they knew they understood the concept.
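
As an illustration of the style (this is not an actual course cell), a filled-in version of such a notebook exercise looked roughly like this; during the lecture, the marked lines were shown as blanks for students to complete:

```python
# Illustrative fill-in-the-blank cell, shown here already completed so it runs.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = SVC(kernel="rbf")   # the kernel choice was left blank for students
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)   # the evaluation call was left blank too
print(f"accuracy: {accuracy:.2f}")
```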

Opinion Polling: Before teaching a solution, I asked students: “What do you think will happen if we do this?” They made guesses. Then, we tested their ideas together. When they discovered they were right, they felt smart. This made them want to learn more.

How Students Applied What They Learned

The best part was that students did not just learn theory. They built 12 real projects of the kind companies actually use:

  1. ChromAI: Machine learning to choose the perfect text color (black or white) based on background color
  2. ScreenBuddy: A movie recommendation system using collaborative filtering (recommending based on similar users)
  3. Identity: Facial recognition system that knows who someone is from a photo
  4. AlexNet: Identifying objects inside images
  5. RemoveBG: Removing background from photos automatically using AI
  6. Upscaling: Making small images bigger and clearer without losing quality
  7. Denoiser: Removing background noise from audio recordings
  8. Bitcoin RNN: Using recurrent neural networks to predict bitcoin prices
  9. Traductor: Translating text from Spanish to English using transformers
  10. Third-Party Models: Learning to use pre-trained models from HuggingFace and Replicate
  11. AIFriend: A chatbot that talks to you like a friend with human voice
  12. Rover Landing: Reinforcement learning agent that learns to land a rover on the moon

Students also learned to deploy these projects to real cloud platforms like Azure, HuggingFace, and Cloudflare. They learned not just to write code, but to put that code on the internet where real people can use it.
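
To give a flavor of that deployment step, here is a minimal sketch of wrapping a model in a Gradio interface, which is one common way to publish a demo on Hugging Face Spaces. It is illustrative only; each project had its own interface, and the exact setup differed between Azure, Hugging Face, and Cloudflare.

```python
# Illustrative sketch: wrap a pre-trained sentiment model in a tiny web UI.
# On Hugging Face Spaces, this file would typically be named app.py.
import gradio as gr
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def analyze(text: str) -> str:
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.2f})"

demo = gr.Interface(fn=analyze, inputs="text", outputs="text",
                    title="Demo: sentiment analysis")

if __name__ == "__main__":
    demo.launch()
```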

The Results

The numbers and feedback showed success:

  • 5.91 out of 6: Students gave me an evaluation score of 5.91 out of a maximum of 6
  • 12 Complete Projects: Every student completed all 12 hands-on projects
  • Full Journey: Students went from not knowing what a neural network was to building and deploying transformer models, reinforcement learning agents, and generative AI systems
  • Job-Ready Skills: When students left class, they could write code for real AI projects, not just pass a test
  • Proof of Impact: The projects are on GitHub, where anyone can see that real work was done
  • Understanding Over Memorization: Students understood why algorithms work, not just how to use them

This was not just teaching. This was showing students what they could build when they understood AI.

Here are some of the projects developed during the course:

ChromAI

Class 1. ChromAI is an AI that determines the optimal font color (light or dark) based on the background color, improving readability in web interfaces. It uses machine learning techniques to predict whether text should be white or black based on the provided background color.

github.com
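
To give a flavor of the idea behind ChromAI, here is a minimal sketch (not the repository's code): a classifier that maps an RGB background color to black or white text. The luminance rule used here to generate training labels is an assumption for illustration only.

```python
# Illustrative sketch in the spirit of ChromAI: predict black or white text
# from a background color. Training labels come from a luminance heuristic,
# which is an assumption made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
backgrounds = rng.integers(0, 256, size=(5000, 3))           # random RGB colors
luminance = backgrounds @ np.array([0.299, 0.587, 0.114])    # perceived brightness
labels = (luminance > 128).astype(int)                        # 1 = use black text

model = LogisticRegression(max_iter=1000).fit(backgrounds / 255.0, labels)

def text_color(r: int, g: int, b: int) -> str:
    return "black" if model.predict([[r / 255, g / 255, b / 255]])[0] else "white"

print(text_color(20, 30, 40))     # dark background  -> "white"
print(text_color(250, 250, 210))  # light background -> "black"
```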

ScreenBuddy

Class 2. ScreenBuddy is an artificial intelligence app that recommends movies based on the preferences of other users with similar likes. The project combines theory and practice in recommendation systems, machine learning, and data analysis.

github.com
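
The core idea behind ScreenBuddy, user-based collaborative filtering, fits in a few lines. The sketch below is illustrative, not the repository's code: it recommends the unseen movie that the most similar user rated highest.

```python
# Illustrative sketch of user-based collaborative filtering on a toy matrix.
import numpy as np

# Rows = users, columns = movies; 0 means "not rated yet"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 1],
    [1, 1, 0, 5],
], dtype=float)
movies = ["Movie A", "Movie B", "Movie C", "Movie D"]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # recommend something for the first user
others = [o for o in range(len(ratings)) if o != target]
similarities = [cosine(ratings[target], ratings[o]) for o in others]
most_similar = others[int(np.argmax(similarities))]

# Suggest the unseen movie that the most similar user rated highest
unseen = np.where(ratings[target] == 0)[0]
best = unseen[np.argmax(ratings[most_similar][unseen])]
print("Recommend:", movies[best])  # -> Movie C
```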

Identity

Class 3. Identity is an artificial intelligence project that implements a facial unlocking system. Given a person's image, the model is able to identify who it is and, if appropriate, unlock the device. This approach seeks to simplify device access, avoiding traditional methods such as PINs or patterns.

github.com

AlexNet

Class 4. This project focuses on developing an artificial intelligence capable of identifying elements in images. Through a series of practical implementations, different convolutional neural network (CNN) architectures are explored.

github.com
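
A minimal version of the class 4 exercise looks like the sketch below (illustrative, not the repository's code). It assumes a recent torchvision release and uses a placeholder image path.

```python
# Illustrative sketch: classify one image with a pre-trained AlexNet.
import torch
from PIL import Image
from torchvision.models import alexnet, AlexNet_Weights

weights = AlexNet_Weights.DEFAULT
model = alexnet(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("example.jpg").convert("RGB")   # placeholder path
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = int(probs.argmax())
print(weights.meta["categories"][top], float(probs[top]))
```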

RemoveBG

Class 5. This project aims to develop an Artificial Intelligence model capable of automatically removing backgrounds from images.

github.com

Upscaling

Class 6. This repository contains a Jupyter Notebook that demonstrates how to use pre-trained AI models from Hugging Face and Replicate to perform image super-resolution. The goal is to upsample a small image (e.g., 96x96) to a larger one (e.g., 512x512) while improving its quality, rather than simply scaling the pixels.

github.com

Denoiser

Class 7. This project is the result of a class on audio processing. The main objective is to develop an artificial intelligence capable of removing background noise from an audio recording.

github.com
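
For context, a classical spectral-gating baseline with Librosa looks like the sketch below. This is not the project's approach, only an illustration of the kind of audio processing Librosa enables; the file names are placeholders.

```python
# Illustrative sketch: crude spectral gating, not a learned denoiser.
import numpy as np
import librosa
import soundfile as sf

audio, sr = librosa.load("noisy.wav", sr=None)    # placeholder input file
stft = librosa.stft(audio)
magnitude, phase = np.abs(stft), np.angle(stft)

# Estimate a noise floor per frequency bin and suppress anything below it
noise_floor = np.percentile(magnitude, 20, axis=1, keepdims=True)
mask = magnitude > 1.5 * noise_floor
cleaned = magnitude * mask * np.exp(1j * phase)

sf.write("cleaned.wav", librosa.istft(cleaned), sr)  # placeholder output file
```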

Bitcoin price predictions

Class 8. This project uses a recurrent neural network (RNN) to analyze the price of Bitcoin and predict its fluctuations.

github.com
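
At its heart, the class 8 project is a recurrent network that reads a window of past prices and predicts the next value. The sketch below is illustrative, not the repository's code; it trains on a synthetic series standing in for real Bitcoin data.

```python
# Illustrative sketch: an LSTM that predicts the next value of a price-like series.
import torch
import torch.nn as nn

# Synthetic series standing in for historical Bitcoin prices
t = torch.linspace(0, 20, 500)
prices = torch.sin(t) + 0.05 * torch.randn(500)

window = 30
X = torch.stack([prices[i:i + window] for i in range(len(prices) - window)]).unsqueeze(-1)
y = prices[window:].unsqueeze(-1)

class PricePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, window, hidden)
        return self.head(out[:, -1])   # predict from the last time step

model = PricePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```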

Spanish to English Translator

Class 9. This project implements an artificial intelligence model to translate text from Spanish to English, using Natural Language Processing (NLP) techniques and a transformer. It also includes a web interface for interacting with the translator.

github.com
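
With today's libraries, the translation step itself can be very short. The sketch below is illustrative, not the repository's code; the public model name it loads is one option from the Hugging Face Hub, not necessarily the one used in class.

```python
# Illustrative sketch: Spanish-to-English translation with a pre-trained transformer.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")
result = translator("La inteligencia artificial está cambiando el mundo.")
print(result[0]["translation_text"])
```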

Usage of third party models

Class 10. This repository contains a Jupyter Notebook tutorial that demonstrates how to deploy and use third-party AI models using tools such as Cloudflare AI Workers or Azure ML Studio.

github.com

AIFriend

Class 11. AIFriend is a voice-to-voice artificial intelligence designed to converse with you. You can speak to it and it will respond in a human voice, simulating a conversation with a friend. This project was born as a response to the growing lack of social interaction in an increasingly digital world.

github.com

Lunar Rover Landing

Class 12. This project focuses on training a reinforcement learning agent to land a rover on the moon safely and efficiently.

github.com
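
As a rough sketch of the class 12 setup (illustrative, not the repository's code): a PPO agent trained on Gymnasium's LunarLander environment. It assumes gymnasium with the Box2D extra and stable-baselines3 are installed; on newer Gymnasium releases the environment ID may be LunarLander-v3, and the timestep count here is only enough for a quick demo, not a well-trained lander.

```python
# Illustrative sketch: reinforcement learning on LunarLander with PPO.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v2")     # may be "LunarLander-v3" on newer Gymnasium
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=50_000)  # a quick demo, not a fully trained agent

# Run one episode with the trained policy and report the total reward
obs, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("episode reward:", total_reward)
```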