Introducing Ailie_Net - My AI Library for Hands-On Beginners
- The Founder
- May 6
- Updated: Jul 18
Today, I introduce Ailie_Net, my open-source library for creating Neural Networks in Python.

What is Ailie Net?
Ailie Net is my open-source AI library designed to be as simple as possible. The aim is to offer beginners an easy-to-use, practical tool for learning how neural networks work behind the scenes, allowing users to explore their own ideas on top of a simple, working baseline.
Built primarily on vanilla Python and NumPy, this library has a basic architecture without any special features. Keeping the library simple provides users with a foundation to experiment, allowing them to modify the library to suit their needs and apply more complex features to expand the library's capabilities.
As Ailie Net does not use any hardware-specific dependencies, such as for GPU acceleration, this library has the benefit of being highly compatible with a range of devices. For example, the full version can be installed on both a laptop and a Raspberry Pi, without the need to slim down the library or create configuration-specific drivers.
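To give a feel for what "vanilla Python and NumPy" means in practice, here is a minimal sketch of the kind of computation a single neural-network layer performs. This illustrates the underlying idea only; it is not Ailie_Net's actual API, and the names are my own.

```python
import numpy as np

def sigmoid(x):
    """Squash values into the 0..1 range."""
    return 1.0 / (1.0 + np.exp(-x))

# One fully-connected layer: 3 inputs feeding 2 neurons
rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 3))  # one row of weights per neuron
biases = np.zeros(2)

inputs = np.array([1.0, 0.0, 1.0])            # e.g. three sensor readings
outputs = sigmoid(weights @ inputs + biases)  # weighted sum, then activation
print(outputs.shape)  # (2,)
```

Everything a basic network does is built from repetitions of this weighted-sum-plus-activation step, which is why plain NumPy is enough.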
Why Was It Created?
Whilst at university, I had multiple robotics and other projects that could have benefited from integrating AI, such as classifying images or helping my robots make decisions. However, I found many issues when learning how to use most of the existing libraries, which, due to already slim time constraints, led me to abandon AI for some of these projects. Now that I have finished university, I want to address some of these issues by creating my own library.
It doesn't take long to find a vast range of useful and sometimes extremely powerful AI libraries available for use, along with plenty of tutorials online. Of course, the more powerful and complex the library, the more computationally taxing it will be on the device - a big limitation when wanting to use it on an embedded device like a Raspberry Pi or an Arduino.
However, the main issue I found was that most tutorials talked more about how to use the sample code than about how it works behind the scenes - a sort of "download my code and replace the training images with your own" approach. This is great if you just need something quick that works. But I wanted something whose inner workings would be easier to learn, so I could mould it to my needs.
In some cases, it was a struggle to get much past the installation stage, with all the ins and outs of different tools, dependencies, and hardware drivers you need to get things started. And of course, I spent a lot of time blindly entering command prompt instructions from forums to resolve issues I was too unfamiliar with to begin to understand - a less-than-ideal learning experience.
After all the hassle of trying to get things to work, and trying to understand how they worked, I just wanted to throw everything away. The plan was to:
Start from scratch.
Make it as easy as possible.
If I don't need anything complex, then don't make anything complex.
Make it so simple that users can pull it apart and learn how to make their own library.
Written primarily in vanilla Python, with NumPy to simplify and speed up the mathematics, I want the library to be as simple to understand as it is to use, with a basic knowledge of Python being the only requirement. By limiting the number of external dependencies, the aim is to make it as compatible as possible across different architectures and operating systems, and easier to translate into alternative programming languages.
Installing Ailie_Net
To get started, you need to install a copy of the library. You can choose to do this from either an already-built or a build-it-yourself option.
For the ready-to-use option, the latest published build can be downloaded from the Python Package Index (PyPI) using the pip command:
pip install ailie-net
More information on this option can be found at: https://pypi.org/project/ailie-net/
If you want to build the project yourself, or dig into the code to make your own changes, you can download the project and its code directly from my GitHub page.
This will provide you with the raw code, along with example code, to be used directly in your project.
Testing It Out
Supplied on GitHub is a range of test programs to verify your installation and to demonstrate what the library can do: https://github.com/RyanB-Micro/Ailie_Net/tree/main/Test_Examples. These projects range from basic pattern recognition to overly simplified chatbots and image classification.
In this section, we will view a couple of my favourite test examples, which you can download and experiment with.
Setting Up the Environment
Before we can use the library, we first need an adequate environment to run the code. A virtual environment allows you to safely create a space for your project code and any dependencies, isolated from other projects whose differing dependency versions could otherwise cause conflicts.
The environment tool I am using today is virtualenv, a free and popular choice for both Windows and Linux-based projects. To install virtualenv, you can run one of the following commands:
pip install virtualenv
Or, on Linux:
apt-get install virtualenv
With the tools to create an environment, we now need a suitable location to make one in. We can do this by creating a new directory, named after the project. This directory will neatly house our project code and any experiments we wish to conduct. I will call this "ailie_test".
cd Desktop
mkdir ailie_test
cd ailie_test
We can now name and create our environment. I am calling it "ailie_test_env" to give it a distinguishable and descriptive name. (Here, Python's built-in venv module is used, which works in the same way as virtualenv.)
python -m venv ailie_test_env
An environment only needs to be created once. However, any time you want to start using your environment, you first need to activate it. This is similar to turning on your computer: it needs to be switched on before you can use the tools within, and when you are done, you can turn it off.
On Linux, the environment can be activated using the following command:
source ailie_test_env/bin/activate
Or, on Windows:
ailie_test_env\Scripts\activate.bat
Installing the Dependencies
Even though the library itself only uses Python and NumPy, the example programs utilise a number of other useful tools that enhance the user's experience. These include Matplotlib, which effortlessly displays graphs and visualises our data, and Pandas, which stores and retrieves data in a database-style format.
To install these packages, the following commands can be used:
pip install numpy
pip install pandas
pip install matplotlib
The json module used by some examples is part of Python's standard library, so it does not need to be installed separately.
With the environment ready to use, you can now download the entire project, including the test scripts, onto your computer using the git clone tool:
git clone https://github.com/RyanB-Micro/Ailie_Net.git
If you don't want to build the project yourself, you can install a pre-built package from PyPI and manually download the test folder from GitHub instead.
pip install ailie-net
Testing the First Program - Left-Right Detector
The concept of this program is to create a simple mechanism that can detect the presence of an object and notify the robot of its location. This could be a very useful example for someone who wants to create a robot that uses sensors to navigate a room.
This works by the "robot" using three "sensors" to detect whether an object is within detectable range. The detected objects are referred to as "pillars": objects tall enough to generate a positive result, but not wide enough to accidentally trigger a neighbouring sensor.
[Object] [Object] [No Object]
o o o
\ | /
-------
/(robot)\
-----------
(Decision: Objects detected on Left Side)
To begin, we need to move to where our test programs are stored, within our project's directory.
cd Ailie_Net
cd Test_Examples
Using the "python" command, we can now run our Python program for this example.
python single_layer_Left_Right_TEST.py
Running the script will immediately define, generate and then train the neural network. The training loop prints out the current accuracy for each training example.

The final results of the tracked error score are plotted upon completion of the training cycle. We can see that over time, the AI gradually improves its accuracy in detecting objects, with a near-zero error at the end of the 50th training iteration.
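The idea behind this training loop can be reproduced in a few lines of plain NumPy. The sketch below trains a single sigmoid layer on hypothetical three-sensor patterns and tracks the error over 50 iterations; the patterns, names, and update rule here are my own illustration, not the actual test script.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sensor patterns: a pillar on the left or the right.
# Targets are [left, right] decisions.
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]], dtype=float)
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

rng = np.random.default_rng(42)
W = rng.normal(scale=0.5, size=(3, 2))
b = np.zeros(2)
lr = 1.0

losses = []
for epoch in range(50):  # mirrors the 50 training iterations above
    pred = sigmoid(X @ W + b)
    err = pred - Y
    losses.append(np.square(err).mean())
    # Gradient of the mean squared error through the sigmoid
    grad = err * pred * (1 - pred)
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)

print(f"error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Plotting the recorded losses would produce a falling curve like the one the test program displays.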

Detecting Images - MNIST Example
In this example, the AI library is put to the test, tasked with differentiating between handwritten digits. This project utilises the MNIST Dataset, a popular training set of handwritten digits derived from NIST data and released in the 1990s.
Before we can run this program, we need a copy of the dataset. Due to possible copyright/licensing concerns, I have not included it within my library. The dataset is freely available from many places, such as other AI library websites or Kaggle (owned by Google). The examples expect the data in CSV format.
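Loading such a CSV is straightforward with Pandas. The sketch below assumes a Kaggle-style layout (label in the first column, 784 pixel values after it); the file name mnist_train.csv is an assumption, and a tiny in-memory stand-in row is used here so the snippet runs without the real download.

```python
import io

import numpy as np
import pandas as pd

# Stand-in for the real file; swap this for
# pd.read_csv("mnist_train.csv") once you have the download.
csv_text = "label," + ",".join(f"pixel{i}" for i in range(784)) + "\n"
csv_text += "5," + ",".join("0" for _ in range(784)) + "\n"
df = pd.read_csv(io.StringIO(csv_text))

labels = df.iloc[:, 0].to_numpy()
pixels = df.iloc[:, 1:].to_numpy(dtype=np.float32) / 255.0  # scale to 0..1
targets = np.eye(10)[labels]  # one-hot encode for a 10-output network
print(pixels.shape, targets.shape)  # (1, 784) (1, 10)
```

Scaling pixel values to 0..1 and one-hot encoding the labels are the usual preprocessing steps before feeding the data to a network with ten outputs.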

With the two files included within the Test_Examples directory, we can now run the Python program and watch the magic at work!
python mnist.py
In this program, we also get some upgrades from the test_utils.py file, such as a progress bar. With approximately 60,000 entries in the training set, there are far too many examples to display the error result of each one for every training cycle.

As well as improving clarity and performance, the progress bar gives a clear indication of how far training has progressed without clogging up the screen with noise. The error for each output is displayed only at the end of each training iteration, while the user can still gauge the training speed and be assured the program has not stalled.
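A single-line progress bar of this kind only needs a carriage return to redraw itself in place. The function below is a simple stand-in for the helper in test_utils.py, whose exact interface may differ.

```python
import sys
import time

def progress_bar(step, total, width=40):
    """Redraw a one-line progress bar in place using a carriage return."""
    filled = int(width * step / total)
    line = "[" + "#" * filled + "-" * (width - filled) + f"] {step}/{total}"
    sys.stdout.write("\r" + line)  # \r returns to column 0 without a newline
    sys.stdout.flush()
    return line

for step in range(1, 101):
    progress_bar(step, 100)
    time.sleep(0.001)  # stands in for one training step
print()  # move to a fresh line once the loop finishes
```

Because each redraw overwrites the previous one, a 60,000-example epoch occupies one line of output instead of tens of thousands.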
After training has completed, the program displays the logged classification errors over time, visualising the changes in the error over the training period.

Another insightful feature of this test program is its ability to display the four worst-performing test images. Once training is complete, the program evaluates the network's performance against unseen data samples, giving a better prediction of how it would behave in the real world.
During this process, the neural network runs through a pass of the test dataset, which contains data not used at any point during training. The four worst-performing samples are then displayed on screen for inspection by the user.
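The "four worst" selection can be sketched as a single argsort over per-sample errors. The toy predictions below are made up for illustration; the real script scores the full MNIST test set.

```python
import numpy as np

# Made-up predictions for 6 test samples across 10 digit classes
rng = np.random.default_rng(1)
predictions = rng.random((6, 10))
targets = np.eye(10)[[0, 1, 2, 3, 4, 5]]  # true labels, one-hot encoded

# Per-sample error: squared distance from the one-hot target
errors = np.square(predictions - targets).sum(axis=1)

# Indices of the four worst-performing samples, worst first
worst4 = np.argsort(errors)[::-1][:4]
print(worst4)
```

The images at those indices are then the natural candidates to plot for visual inspection.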

At a squint, it becomes understandable where the network went wrong. For example, in the bottom-left image, the horizontal line sits at the bottom of the figure with a large anti-clockwise curve above it, resembling the main characteristics of the number 2. Similarly, in the bottom-right image, the number 4 is highly disfigured, consisting mostly of a slanted vertical line with a mass at the top, sharing many characteristics with the number 1.
Altogether, this test example offers users a basic template for experimenting with different quantities of neurons, neuron layers, and types of activation functions to understand how these properties can affect image classification problems.



