Bachelor of Science in Computer Science
Course Content: Neural Networks
Habari, Future AI Guru! Let's Build a Digital Brain!
Welcome to the most exciting part of Artificial Intelligence! Imagine you and your friends in a chama (a local savings group) are deciding on the best investment. Everyone brings their own information ("Safaricom shares are up!", "The price of maize is good this season."). You weigh each person's advice based on how knowledgeable they are, add your own gut feeling, and then the group makes a final decision. Congratulations, you've just acted like a Neural Network!
In this lesson, we'll break down how computers can learn to make smart decisions, just like our chama, by mimicking the human brain. Get ready to understand the magic behind everything from M-Pesa's fraud detection to apps that can diagnose crop diseases from a photo!
Image Suggestion: A vibrant, Afrofuturistic illustration of the Nairobi skyline at night. Glowing, interconnected lines, like a neural network, link major landmarks such as the KICC, Parliament buildings, and modern skyscrapers. The style is sleek and digital, with a purple and blue color palette, symbolizing the fusion of Kenyan culture and advanced technology.
The Basic Building Block: The 'Neuron'
Just like your body is made of cells, a Neural Network is made of neurons (sometimes called 'nodes'). A single neuron is like one person in your chama. It's a simple decision-maker.
A neuron does three simple things:
- Receives Information (Inputs): It takes in one or more pieces of data.
- Processes It (Applies Weights & Bias): It "thinks" about the information by giving each piece a certain importance (a 'weight').
- Gives a Result (Output): It makes a final decision or signal to pass on.
Here’s a simple look at a single neuron:
(Input 1) ---> | |
(Input 2) ---> | NEURON | ---> (Output)
(Input 3) ---> | |
The 'Uchumi' of a Neuron: Inputs, Weights, and Bias
Let's make this real. Imagine a neuron that has to decide: "Should I carry an umbrella today?"
The Inputs (x) are the data points it looks at:
- x1: Is it cloudy? (Let's say 1 for Yes, 0 for No)
- x2: What is the humidity? (A value from 0 to 1)
The Weights (w) are how much importance we give to each input. You might care more about clouds than humidity. So, the weights might be:
- w1 (for clouds): 0.7 (very important!)
- w2 (for humidity): 0.4 (kind of important)
The Bias (b) is like your personal starting point. Maybe you're a cautious person who tends to carry an umbrella anyway. Your bias might be a small positive number, like 0.1. It's an extra value that helps the network make better decisions.
The Big Decision: The Activation Function
After the neuron multiplies the inputs by their weights and adds the bias, it gets a number. But we don't want just any number; we often want a clear decision, like "Yes" or "No".
This is where the Activation Function comes in. It's a special mathematical function that takes the calculated number and squashes it into a useful output, often between 0 and 1.
A very common one is the Sigmoid function, which produces an S-shaped curve. A value close to 1 means "Yes, definitely!", a value close to 0 means "No, not at all!", and 0.5 means "I'm not sure."
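Here is a minimal sketch of the Sigmoid function in Python, so you can see the "squashing" in action:

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

print(sigmoid(5))    # close to 1 -> "Yes, definitely!"
print(sigmoid(-5))   # close to 0 -> "No, not at all!"
print(sigmoid(0))    # exactly 0.5 -> "I'm not sure."
```

No matter how large or small `z` gets, the output always stays between 0 and 1.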
Image Suggestion: A clear, simple diagram of a biological neuron next to a digital perceptron (artificial neuron). The biological neuron has dendrites, a cell body, and an axon. The digital neuron has inputs (x1, x2), weights (w1, w2), a summation symbol (Σ), and an activation function leading to an output. Arrows clearly label the corresponding parts to show the analogy.
Let's Do the Math! (A Simple 'Forward Pass')
Let's calculate the decision for our umbrella neuron. This process of passing inputs through the network to get an output is called a Forward Pass.
Scenario: It's very cloudy (x1 = 1) and quite humid (x2 = 0.8).
Our weights are w1 = 0.7, w2 = 0.4 and our bias is b = 0.1.
Step 1: Calculate the weighted sum (we'll call it 'z')
z = (x1 * w1) + (x2 * w2) + b
z = (1 * 0.7) + (0.8 * 0.4) + 0.1
z = 0.7 + 0.32 + 0.1
z = 1.12
Step 2: Apply the Activation Function (Sigmoid)
The Sigmoid formula is: 1 / (1 + e^(-z))
Output = Sigmoid(1.12)
Output = 1 / (1 + e^(-1.12))
Output = 1 / (1 + 0.326)
Output ≈ 0.75
The output is 0.75. Since this is much closer to 1 than 0, the neuron's decision is: "Yes, you should probably carry an umbrella!"
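You can verify the two steps above in just a few lines of Python:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Inputs: very cloudy (1) and quite humid (0.8)
x1, x2 = 1, 0.8
# Weights and bias from the example above
w1, w2, b = 0.7, 0.4, 0.1

# Step 1: the weighted sum
z = (x1 * w1) + (x2 * w2) + b
# Step 2: the activation function
output = sigmoid(z)

print(round(z, 2))       # 1.12
print(round(output, 2))  # 0.75
```

The code agrees with our hand calculation: about 0.75, so carry that umbrella!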
From One Neuron to a 'Harambee': The Full Network
A single neuron is smart, but the real power comes when we connect many of them together in layers, just like a community working together (Harambee!).
- Input Layer: The first layer that receives the raw data (e.g., the pixels of an image, the details of a transaction).
- Hidden Layers: One or more layers in the middle. This is where the real "thinking" happens. Neurons in these layers find complex patterns.
- Output Layer: The final layer that gives the answer (e.g., "This is a cat," "This transaction is fraudulent").
HIDDEN LAYER
+------------+
INPUT LAYER | | OUTPUT LAYER
+-------+ | /-----\ | +--------+
| Input |---->O--|Neuron1|---O-->| Output |
+-------+ | \-----/ | +--------+
| |
+-------+ | /-----\ |
| Input |---->O--|Neuron2|---O
+-------+ | \-----/ |
| |
+-------+ | /-----\ |
| Input |---->O--|Neuron3|---O
+-------+ | \-----/ |
+------------+
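The layered structure above can be sketched in plain Python. Note that the weights and biases here are made-up numbers purely for illustration; a real network would learn them from data:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of ALL inputs, plus its bias, then sigmoid."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# A tiny network: 3 inputs -> a hidden layer of 3 neurons -> 1 output
inputs = [1.0, 0.5, 0.2]
hidden = layer(inputs,
               weights=[[0.2, 0.8, -0.5],   # neuron 1's weights
                        [0.5, -0.9, 0.3],   # neuron 2's weights
                        [-0.3, 0.1, 0.7]],  # neuron 3's weights
               biases=[0.1, -0.2, 0.0])
output = layer(hidden, weights=[[0.6, -0.4, 0.9]], biases=[0.05])
print(output)  # a single value between 0 and 1
```

Notice that the output layer's input is the hidden layer's output: information flows forward, layer by layer, just like in the diagram.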
Neural Networks in Our Kenya: Real-Life Magic!
Example 1: M-Pesa Fraud Detection
Think about the millions of M-Pesa transactions every day. How does Safaricom spot a potential thief? A neural network can be trained on a huge dataset of both normal and fraudulent transactions. It learns the hidden patterns: Is the amount unusually large? Is it being sent at 3 AM to a new number? Is the location strange? The network can flag suspicious activity in real-time, protecting our money!
Example 2: Agriculture and 'Shamba' Intelligence
A farmer in Uasin Gishu can take a photo of a sick maize leaf. An app powered by a neural network can analyze the image, identify the specific disease (like Maize Lethal Necrosis), and suggest a treatment. The network was trained by looking at thousands of photos of healthy and diseased plants until it became an expert botanist!
How Do They Learn? Like a Student Before an Exam!
So how do the weights and biases get their perfect values? The network has to learn. This process is called training.
It works like this:
- The network is given a problem where we already know the answer (e.g., a photo of a cat, with the label "cat").
- It makes a guess (a forward pass).
- It checks how wrong its guess was (this is called the 'loss' or 'error').
- It then goes backward through the network and slightly adjusts all its weights and biases to make a better guess next time. This adjustment process is called Backpropagation.
Imagine studying for an exam. You do a practice test, check your answers, see what you got wrong, and then go back to your books to study those topics more. A neural network does this millions of times, getting a little smarter with each example, until it becomes an expert.
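Here is a toy sketch of that learning loop for a single neuron, using simple gradient descent on made-up umbrella examples (the data, learning rate, and number of passes are all illustrative assumptions):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Made-up practice examples: (cloudy, humidity) -> umbrella? (1 = yes, 0 = no)
data = [((1, 0.9), 1), ((1, 0.6), 1), ((0, 0.2), 0), ((0, 0.4), 0)]

w1, w2, b = 0.0, 0.0, 0.0   # start knowing nothing
lr = 0.5                    # learning rate: how big each adjustment is

for epoch in range(1000):                       # many passes over the examples
    for (x1, x2), target in data:
        pred = sigmoid(x1 * w1 + x2 * w2 + b)   # forward pass: make a guess
        error = pred - target                   # how wrong was the guess?
        # For a sigmoid output with cross-entropy loss, the gradient with
        # respect to z is simply (pred - target), so nudge each parameter
        # a little in the direction that reduces the error:
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b  -= lr * error

# After training, the neuron has learned the umbrella rule on its own:
print(sigmoid(1 * w1 + 0.8 * w2 + b))   # cloudy and humid -> close to 1
print(sigmoid(0 * w1 + 0.3 * w2 + b))   # clear and dry    -> close to 0
```

Each pass is exactly the exam-practice cycle: guess, check the error, adjust, repeat. Full backpropagation does the same thing, but propagates the adjustments backward through every layer.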
Your Turn! A Glimpse of Code
You don't have to build all the math from scratch. Powerful libraries like TensorFlow and PyTorch make it easy. Here’s what a simple neural network model might look like in Python using the Keras API:
# This is a conceptual example in Python using a popular library
import tensorflow as tf
from tensorflow import keras
# Define the model structure, like stacking building blocks
model = keras.Sequential([
    # Input layer and the first hidden layer with 128 neurons
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    # A second hidden layer with 64 neurons
    keras.layers.Dense(64, activation='relu'),
    # The output layer with 10 neurons (e.g., for digits 0-9)
    keras.layers.Dense(10, activation='softmax')
])
# 'Compile' the model by telling it how to learn
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Now, we would train the model with data!
# model.fit(training_images, training_labels, epochs=10)
Don't worry about understanding every line right now. Just see how we can define the layers of our network in a few simple commands. You are closer to building this than you think!
Conclusion: You Are the Architect!
Neural networks are not magic; they are brilliant tools inspired by the brain, built on simple mathematical ideas. From securing our mobile money to helping farmers feed the nation, their potential in Kenya is limitless. You've taken the first step to understanding them today. Keep asking questions, keep experimenting, and soon you'll be the one designing the next great AI solution!
Pro Tip
Take your own short notes while going through the topics.