
A Friendly Introduction to the Digital Neuron

Writer: The Founder

Updated: Mar 9

Introduction

Over the next few blogs, I want to introduce you to the concept of neural networks from an easy-going perspective that won't overwhelm you with maths. This includes topics such as what a digital neuron is, how they work, and what kind of fun stuff we can make them do.


The Digital Brain

Brains are typically portrayed as mysterious and immensely complex. That's because they usually are. However, my aim is to show you how experimenting with simple “brains”, easy enough to make yourself, can let us do some very interesting things without being too complicated at all.

 

Here is a digital “brain,” a group of “Neurons” working together to achieve some goal. It looks cool, right? Well, it probably just looks like a bunch of circles and lines, but this architecture has a lot of potential to do some great work!

The truth is, this complex brain is mostly the same bunch of simple equations replicated many times. Additionally, neural networks don’t “think” like we do. Our brains act as universal problem-solving machines, able to swap between a vast range of tasks and tackle problems we have never seen before. Neural networks, in contrast, are generally designed to perform a specific job, such as detecting patterns or replicating trends.


So, how do we turn these magic “circles and lines” into a tangible reality we can actually use?


The Digital Neuron

A digital neuron is the smallest fundamental part of a neural network, represented as a single node within this “net”. Today, we will be investigating the single neuron in both biological and digital forms, what they do, and how they work. Once we have explored the single neuron, we will then have the tools to progress into multiple-neuron experiments in later blogs.


This is a biological neuron:


And here is a digital neuron:


What have they actually got in common?


Breaking Down the Neuron

In its simplest form, a neuron does three main things: It takes an input, processes that input, and generates a response. The inputs could be information from external sensory devices or other neurons, with the outputs going to other neurons or other external devices.


Receiving Inputs

The biological neuron receives inputs through little branching, tree-like structures called dendrites. The dendrites receive chemical signals from other neurons, which they then translate into electrical signals for use within the neuron.


The connections between biological neurons are known as synapses, extremely small gaps between the paired inputs and outputs of neurons. To send signals across these synapses, neurons release chemical messengers known as neurotransmitters, which can physically cross the gaps between the neurons. Once received by the dendrite, these neurotransmitters trigger an exchange of charged ions that physically affects the overall charge within the dendrite, causing a measurable electrical change that the neuron can act on.


The process is a little complex, but to put it simply: chemical signals are sent between neurons, creating electrical signals to be used inside the neurons.


However, this process is vastly more straightforward with digital neurons, where digital values are transferred directly between neurons within code. For the examples on this page, digital neurons will receive and transmit binary on/off signals, represented numerically as 1s and 0s.


Processing Inputs

Once signals are received by a neuron, they have to be processed. This processing is what is responsible for the decision-making characteristics of the cell.


In the biological world, a process called “synaptic integration” occurs, where the signals from the various dendrites are combined within the cell body and then processed as a whole. If the summed signal becomes greater than a specific threshold within the neuron, the neuron outputs an electrical response to be sent across to the next batch of neurons. These input signals can be of varying strengths, providing different levels of influence on the neuron's response, dependent on factors such as the size, location, and quantity of synapses with the connected cell.


Similarly, the digital neuron also combines the individual inputs, weighted by how influential each input is to the specific process. Likewise, if a certain threshold is met, a response will be sent to the next set of neurons.
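As a sneak peek, here is a minimal sketch of that whole decision in Python. The numbers are just example values (we will build this up properly in the walkthrough later), so don't worry about the details yet.

# A minimal sketch of a digital neuron's decision (built up step by step later)
inputs = [0, 1, 0, 1]    # example on/off signals coming in
weights = [0, 1, 0, 1]   # how influential each input is
threshold = 2            # the bar the combined signal must reach

# Combine the weighted inputs and check them against the threshold
total = sum(value * weight for (value, weight) in zip(inputs, weights))
print(total >= threshold)  # True - this neuron would fire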

 

Outputting a Response

Once the signals have been processed, the neuron outputs this internal reaction to the outside world. Without this, the neuron wouldn't be able to influence anything with its decision.


When the combined voltages of the neuron's inputs pass an internal threshold, the biological neuron decides to fire, initiating an electrical impulse known as an “action potential”, which triggers the axon terminals to release neurotransmitters. As mentioned before, these chemicals can physically cross the gap between the neuron's axon terminal and the dendrites of the other connected neurons. The passing of this chemical signal then triggers an electrical response in the next neuron's dendrite, and so on.


Within a digital neuron, an “activation function” controls the neuron's response, such as determining the output's strength. In these examples, a simple threshold function decides whether a signal is sent or not, producing a digital on/off response. However, there are many alternative activation functions to choose from, with vastly different analogue and digital behaviours.
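As a taste of what's out there, here is a small sketch of two well-known alternatives, the smooth “sigmoid” and the “ReLU” functions. These are not used in this blog's examples, just shown for comparison.

import math

# A smooth, analogue-style activation squashed between 0 and 1
def sigmoid(total):
    return 1 / (1 + math.exp(-total))

# Passes positive totals through unchanged and clips negatives to zero
def relu(total):
    return max(0, total)

print(sigmoid(2))  # roughly 0.88
print(relu(-3))    # 0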

 

Jargon aside, it's pretty simple, right? Various signals of varying importance come in, get combined, and get checked against a threshold to generate an appropriate response. Let's take a closer look at it with a simplified walkthrough.


The Walkthrough

Let's pretend we have some kind of super simplistic camera providing an image to a robot's brain. We could then use this basic “brain” to detect and act upon objects within its environment. To keep it simple, let's say the camera only allows values of black and white, where white means the pixel is off (binary 0) and black means the pixel is on (binary 1). This way, a detected object will be received as a black silhouette in front of a white background.


Take the grid below. These four squares can represent the four pixels that our super simple camera may have. Although there are not a lot of pixels to work with, it's enough to run some simple experiments without overcomplicating the problem.

 

Using this black-and-white premise, a line detected within the two right-most pixels can be represented as:


Mapped into binary, scanning the image starting with the top row, from left to right, this can be stored in memory as the following list:


pixels = [0, 1, 0, 1]
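If you are curious how that scan might look in code, here is a small sketch using a hypothetical “grid” variable that holds the image as rows:

# A hypothetical 2x2 "image", stored as rows (top row first)
grid = [[0, 1],
        [0, 1]]

# Scan the grid row by row, from left to right
pixels = []
for row in grid:
    for pixel in row:
        pixels.append(pixel)

print(pixels)  # [0, 1, 0, 1]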

These pixel values will be the inputs to our network, even if the network is just one lone neuron.

Though nothing special, making a neuron able to detect this pattern could be the first step towards detecting walls, obstacles and other objects within its environment.


Collecting the Inputs

Each input to the neuron provides some level of influence on deciding the output. For this reason, each input has to be taken into consideration. Now that the image has been scanned, this data can be stored within a list and provided to the neuron for analysis.



Combining the Inputs

The signals from the dendrites are combined within the “soma,” the cell body of the biological neuron. This collection of inputs could be modelled as a simple summation of our pixel values.

In the programming language Python, there are a number of ways to sum a list of numbers. The easiest method to learn is to loop through each number stored in a list, adding the value to our total in turn.


A “for loop” can be used to individually pick out each item in our list. As a value gets selected, we can store this retrieved value under a placeholder name, such as “value”. This makes the code more readable for beginners, avoiding the need to numerically index the list or worry about how long the list is.

# Sample image from the camera
pixels = [0, 1, 0, 1]

# Initialise a variable to store the running total
total = 0

# Loop through each input and sum the values
for value in pixels:
	# Perform the calculation (explained below)
	total = total + value

To calculate the total, the loop uses the following expression:

total = total + value

This instruction combines the current total with the retrieved pixel value. Once it has completed this, it updates the “total” variable with this new calculated value. Or in other words:

new_total = old_total + input_value

 

This process can be stored within a “function”, a useful chunk of code we may want to use many times in various parts of a larger program, stored neatly within its own container.


By providing the function's name, we can “call” the function to trigger it. Additionally, when calling a function, we can include parameters we want it to work with. The process of calling a function allows us to use the same algorithm in multiple places within a program, rather than having to re-type it in each place it needs to be used.


A function can also respond by “returning” a value back to where it was called. For example, we can create a function to sum all of the inputs and return the result we can store into a variable.


Below is a prototype of such a function, defined with the name "sum_inputs"; it expects a list of values to sum, which it will refer to by the alias "inputs_in".

# Function to combine the pixel value inputs
# The function takes a list of inputs that it will refer to as “inputs_in”
def sum_inputs(inputs_in):
	# Initialise variable to store the result
	total = 0

	# Loop through each input and sum the values
	for value in inputs_in:
		total = total + value

	# return the summed total
	return total

This simple function can be tested by calling it with a list of test values. The response can then be displayed on the screen using a “print” command.

# Sample image from the camera
pixels = [0, 1, 0, 1]
# Call the function using the pixel inputs
response = sum_inputs(pixels)
# Display the response on the screen
print("Sum of pixels:", response)

 

However, this code has one major flaw: it can tell us how many pixels are active, but not which pixels are active. In order to detect our pattern, we need our neuron to differentiate between any two pixels being active and the two specific pixels of our pattern being active.



Counting the number of active pixels does not, by itself, provide us with any useful information, so context needs to be added to these inputs. As mentioned before, each input contributes some level of influence on the output. However, the level of influence of each input is individually weighted. To model this, we can add a list storing the weight associated with each input.

 

The Strength of Inputs

The status of one pixel may be more significant than that of others. The more significant a pixel's value is, the more influence we want it to exert. Alternatively, if a pixel offers little significance, then we want to suppress its value.


Biological neurons can achieve this by building stronger or additional connections with neurons that offer a valuable input, and weaker connections with those that don't. For example, dendrites can become quite long, and connections closer to (or even on) the cell body are much stronger than those far away. Additionally, a larger synapse, or multiple synapses with the same cell, can also boost the strength of an input over that of others.


For digital neurons, we can attempt to mirror this using “weight” values, which together form a weight vector. Each input can be multiplied by its weight to either increase or decrease its chance of causing a positive response.

For example, if we want an input signal to be strong, we want as much of that original signal as possible to pass through unaltered. To do this, we need the multiplied weight value to be very close to the number one: number in × 1 = number in. This will apply to the two rightmost pixels we are trying to detect, stored as the 2nd and 4th values in our list.

weights = [x, 1, x, 1]

However, the more irrelevant the value of a specific pixel is, the more we need to suppress it. The closer the multiplied weight value is to zero, the more the input will be reduced: number in × 0 = 0. We want to limit how much the two leftmost pixels contribute to a positive detection result, so we can set their weight values to zero.

weights = [0, 1, 0, 1]

To apply this weighting, we need to multiply these values with the inputs during the process of calculating our sum. This combined process is known as a “weighted sum”, and builds upon the code we produced before.


In Python, we can use the “zip” command to combine two lists effortlessly. It works through both lists together, one item at a time, pairing each item with the associated item in the other list. Similar to last time, we can give these retrieved values placeholder names to refer to them by.
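Before wiring zip into our neuron, here is a tiny standalone demonstration of the pairing (the letters and numbers are just made-up example lists):

# A quick demonstration of how zip pairs two lists together
letters = ["a", "b", "c"]
numbers = [1, 2, 3]

for (letter, number) in zip(letters, numbers):
    print(letter, number)

# Prints "a 1", then "b 2", then "c 3"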

# A function to calculate the weighted sum of the inputs.
# Multiplies each input by its associated weight and sums the result.
def weighted_sum(inputs_in, weights_in):
    # Initialise the variable to store the result
    total = 0

    # Pair each input to its associated weight
    for (value, weight) in zip(inputs_in, weights_in):
        # Update the total, adding the latest weighted sum
        total = total + (value * weight)

    # Print the calculated total
    print("total =", total)
    # Return the total to whatever called the function
    return total

We can test this function by creating lists of test inputs and test weights. By changing the values in these variables, we can experiment to see how our algorithm responds.

# A list to store the pixel values
pixels = [0, 1,
          0, 1]
# A list to store the weight values
weights = [0, 1,
           0, 1]

 

These lists can be passed into the function call, with the response printed on the screen.

# Calculate the weighted sum of the pixel inputs
response = weighted_sum(pixels, weights)

# Display the result
print("Weighted Sum of pixels:", response)

 

Experiment with different weight and pixel values to see how this will affect detecting various patterns.


By providing the function with the following test “Images”, we can see that the neuron can now react differently depending on what combination of pixels are turned on.
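For example, a quick test loop like the sketch below (these sample images are illustrative choices) runs a handful of images through the function:

# A few sample test "images" to try (illustrative values)
test_images = [
    [0, 1, 0, 1],  # the right-hand line we want to detect
    [1, 0, 1, 0],  # a left-hand line
    [1, 1, 1, 1],  # every pixel on
    [0, 0, 0, 0],  # every pixel off
]

# The weights tuned towards the right-hand line
weights = [0, 1, 0, 1]

# Run each test image through the weighted sum
for image in test_images:
    print(image, "->", weighted_sum(image, weights))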


From this quick test, we can see that the weighted sum only produces the result of 2 when the two right-most pixels are on, and is unaffected by whether any additional pixels are on or off. This shows we are already close to detecting our chosen pattern.


Generating the Output

With the inputs combined, our neuron is now ready to make a decision and produce its output.


A biological neuron has a slightly negative resting voltage by default. This resting voltage of the neuron's cell membrane is typically around -70 millivolts. As the dendrites receive signals, this membrane potential “depolarizes” as its charge increases.


Once the membrane potential reaches -55 millivolts, the neuron has reached the threshold for triggering the voltage-gated sodium channels at the axon hillock, the junction between the cell body and its axon tail. When these sodium channels open, a vast number of positively charged sodium ions flood into the neuron, causing an intense reaction called an action potential that fires the output along the axon and through the axon terminals, triggering the release of neurotransmitters across the synapses.


Though complex biologically, we can achieve this quite simply using an “if statement”. An if statement compares a value to some test condition; if this test condition is true, the if statement will trigger its encapsulated code.


Try the following code on your computer, changing the values of number_1 and number_2 to experiment with how the if statement works.

# Test values to compare
number_1 = 10
number_2 = 5
# Check if number 1 is greater than number 2
if number_1 > number_2:
    # Run code inside the statement if true
    print("Number 1 IS greater than number 2")
else:
    # Otherwise, run this code if the condition is false
    print("Number 1 IS NOT greater than number 2")

 

To mimic this threshold for our neuron, we can create a “step function”. A step function will check whether the weighted sum has reached a numerical threshold and, if so, trigger a positive output. If, however, the threshold has not been reached, the output stays at zero.


This function can take the total that was calculated earlier and compare this result to a provided threshold value:

# A simple "step" activation function.
# Returns true if the value reaches the threshold.
def step(total_in, threshold_in):
    # Checks if the total has reached the threshold
    if total_in >= threshold_in:
        # Returns true if threshold reached
        return True
    else:
        # Returns false if threshold not reached
        return False

This can be tested using the following code:

# Two test values to verify the step function
Num_A = 10
Num_B = 2

# Test the step function using the two test values
print("IS A greater than B?", step(Num_A, Num_B))

Putting it Together

Using the weighted sum and step functions produced before, we can combine these into one function that performs the whole neuron process.

# A simplistic function representing a digital neuron.
# Checks whether the weighted sum of the inputs reaches a set threshold.
def neuron(inputs_in, weights_in, threshold_in):
    # Calls the function to calculate the weighted sum
    sum_weighted = weighted_sum(inputs_in, weights_in)
    # Calls the function to apply the step function
    sum_stepped = step(sum_weighted, threshold_in)
    # Return the result of the neuron's decision
    return sum_stepped

To use this neuron, we first need to establish our sample “image” pixel values, the weights that fine-tune our neuron, and the threshold to compare against.

# A sample "image" to provide to the neuron
pixels = [1,1,
          1,1]
# The selected weight values for the neuron
weights = [0, 1,
           0, 1]
# The selected threshold value for the neuron
threshold = 2

We can now call the neuron function using these test values and display the result on the screen.

# Get an answer from the neuron for the provided image
answer = neuron(pixels, weights, threshold)
# Display the results of the neuron's answer
print(answer)

If successful, the neuron should return the value of “True” to signify that the line has been detected.

 

Creating a Model

A model typically refers to the neural network architecture that we can use to accomplish our task.


We have successfully created one neuron, but what if we want multiple? Creating variables to store the individual weighted sums and results of a vast number of neurons, and copying and pasting the same function calls for each neuron in the network, would be extremely tedious and time-consuming.


A solution to this issue would be to create a template of the neuron from which we build our network. In Python, we can make a neuron “class” to form the template that all neurons are based upon, with individual neurons created by making instances of this class, known as objects.

 

For example, let's say we wanted to make a template for a dog. Well, we can create a class named “Dog” (classes are commonly named to start with a capital letter) and provide it with the ability to initialise itself with some default values.

# Create a template for a "dog"
class Dog:
    # Things within Dog go inside here
    pass

If you are referencing a variable or function within the class itself, you can prefix it with “self.”, as this distinguishes between variables and functions inside and outside of the class. The period signifies to the computer that we want to use a property within the item we are referencing, in this case a variable called "sound" that belongs to the class itself. Cleverly, when initialising a class, if we assign a value to a class variable that does not yet exist, Python will create that variable for us.

# Create a template for a "dog"
class Dog:
    # Initiate the class with default values
    def __init__(self):
        self.sound = "Bark"

To provide our class some functionality, we can create a “method”, a function that belongs to and can only be used within a specific class. This method will provide our “dog” with the ability to make a sound.

# Create a template for a "dog"
class Dog:
    # Initiate the class with default values
    def __init__(self):
        self.sound = "Bark"

    # Create a "method" for our class
    def make_sound(self):
        print(self.sound)

We can make an object to create a named instance of this class.

# Create an instance of the Dog Class
Jake = Dog()

To trigger a method within the class, we can type the name of the object we want to use and then the name of the intended method, separated by a period. This command essentially says to the computer: trigger the method “make_sound” that belongs to the object “Jake.”

Jake.make_sound()

The output should be the contents stored within the object's “sound” variable. We can also change the internal values of an object independently from other objects of the same class. This can be done by assigning a new value to that object's variable.

Jake.sound = "Bark Bark"
Jake.make_sound()

This should now return the response of “Bark Bark”.
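To see this independence in action, here is a small sketch creating a second, hypothetical dog alongside Jake:

# A second object from the same class keeps its own independent values
Rex = Dog()

Rex.make_sound()   # Prints the default "Bark"
Jake.make_sound()  # Still prints "Bark Bark"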

 

With the concept of classes and objects established, we can now create a class for our neurons. This combines the variables and functions we have used separately into one neat package.

class Digi_Neuron:
    # Initialise an object with the provided values
    def __init__(self, weights_in, threshold_in):
        self.weights = weights_in
        self.threshold = threshold_in

    # Combine the inputs using a weighted sum
    def combine_inputs(self, inputs_in):
        self.weighted_sum = 0
        for (value, weight) in zip(inputs_in, self.weights):
            self.weighted_sum += (value * weight)

    # Generate a response using a step function
    def activation_function(self):
        return self.weighted_sum >= self.threshold

Testing the Model

To test our model, we can first create an object and pass it the initial setup values.

# Test image of a line
test_image = [0, 1,
              0, 1]
# The weights tuned for our neuron
weights = [0, 1, 0, 1]
# Create an instance of the digital neuron
Line_Detector = Digi_Neuron(weights, 2)

Next, we can combine the inputs by performing a weighted sum and, finally, generate a response using our step activation function.

# Combine the inputs to the neuron
Line_Detector.combine_inputs(test_image)

# Generate a response from the neuron
response = Line_Detector.activation_function()
print("Line Detected?", response)

If done successfully, we will get the answer of True when we provide it with our test image. But what about other inputs? Give it a try!
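If you would like a systematic way to experiment, the sketch below uses Python's built-in itertools module to run every possible combination of the four pixels through the neuron:

from itertools import product

# Try every possible 4-pixel image, from (0, 0, 0, 0) to (1, 1, 1, 1)
for image in product([0, 1], repeat=4):
    Line_Detector.combine_inputs(list(image))
    print(image, "->", Line_Detector.activation_function())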

 

Simplifying the Neuron

The process of getting the inputs all the way to generating the response is known as a “forward pass” through the neuron. Knowing this, we can simplify the way we use our neurons by making a forward pass method. This way, it will be easier to generate an output from not just this neuron but others, too. This change eliminates the need to manually tell each neuron to perform a weighted sum and then manually tell it to perform the activation function.

class Digi_Neuron:
    # Initialise an object with the provided values
    def __init__(self, weights_in, threshold_in):
        self.weights = weights_in
        self.threshold = threshold_in

    # Combine the inputs using a weighted sum
    def combine_inputs(self, inputs_in):
        self.weighted_sum = 0
        for (value, weight) in zip(inputs_in, self.weights):
            self.weighted_sum += (value * weight) 

    # Generate a response using a step function
    def activation_function(self):
        return self.weighted_sum >= self.threshold

    # Fulfil the input-to-output processes of the neuron
    def forward_pass(self, inputs_in):
        self.combine_inputs(inputs_in)
        return self.activation_function()

With the improved class created, we can now create our objects and provide our weight vectors.

# Test image of a left wall
test_image_1 = [1, 0,
                1, 0]
# Test image of a right wall
test_image_2 = [0, 1,
                0, 1]
# Create a neuron to detect left walls
left_weights = [1, 0, 1, 0]
Left_Detector = Digi_Neuron(left_weights, 2)
# Create a neuron to detect right walls
right_weights = [0, 1, 0, 1]
Right_Detector = Digi_Neuron(right_weights, 2)

Triggering the end-to-end process and generating a response from the neuron using provided pixel data can now be accomplished with one line, far more easily than before.

# Test the two neurons with an image of a left wall
left_answer = Left_Detector.forward_pass(test_image_1)
right_answer = Right_Detector.forward_pass(test_image_1)
print("Testing with ""left wall"" image")
print("Left Detector = ", left_answer)
print("Right Detector = ", right_answer)
# Test the two neurons with an image of a right wall
left_answer = Left_Detector.forward_pass(test_image_2)
right_answer = Right_Detector.forward_pass(test_image_2)
print("\nTesting with ""right wall"" image")
print("Left Detector = ", left_answer)
print("Right Detector = ", right_answer)

From this test, the left detector should detect the line in the first image, while the right detector won't. Similarly, the right detector should detect the line in the second image, while the left detector should not.


Going Further

We know that our neurons can detect lines in our images when given the input combinations we designed them to expect. But what about other input combinations, patterns we wouldn't expect our neuron to come across?


The benefit of using a neural network to detect a pattern over a bunch of if statements is that a sufficiently tuned neural network can “generalise” a problem. This essentially means that, like humans, the neuron can understand the problem well enough to generate reasonable responses to situations it was not expecting or has not been shown before.
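For contrast, a hypothetical hand-written check for our right-hand line might look like the sketch below. It works for that one exact pattern, but it encodes a single rigid rule, with no weights to tune and no room to generalise.

# The hand-coded alternative: one rigid rule for one exact pattern
def detect_right_line(pixels):
    # Only ever recognises the single pattern it was written to check
    return pixels[1] == 1 and pixels[3] == 1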


Unfortunately, our neuron is not quite there yet, with some questionable results for specific input configurations. Do some experiments to see if you can find what these are. We will address this next time with the introduction of inhibitory responses and biases.


Closing Summary

A biological neuron receives signals through tree-like structures called dendrites that can connect to other neurons and form a network. The connections between neurons are known as synapses and allow chemical signals called neurotransmitters to be physically exchanged between neurons.


The chemical signals received by the dendrites are converted into electrical signals that alter the cell membrane's electrical potential, a process known as depolarization. The size, quantity and location of synapses can affect the strength of the input signals.


The various input signals are combined within the cell body in a process called synaptic integration. If the summed inputs raise the membrane potential above a specific threshold at the axon hillock, voltage-gated sodium channels trigger a rapid response by the neuron called an action potential. This action potential creates a strong output that is transferred along the cell's axon and through the axon terminals onto synapses with other connected neurons.


Digital neurons mimic this behaviour by combining digital numeric inputs by summing them. The strength of the signals is controlled by weight vectors that enhance or suppress the incoming signals. The technique of weighting and summing the inputs is known as a weighted sum. An activation function dictates the output of a digital neuron, where a simplistic activation function known as a "step function" can verify whether the weighted sum total has reached a set threshold and output an appropriate response.


A simplistic representation of the characteristics of the biological neuron can be described by the code below:

# A simplified biological neuron represented in Python code
class Digi_Neuron:
    # Initialise an object with the provided values
    def __init__(self, weights_in, threshold_in):
        self.weights = weights_in
        self.threshold = threshold_in

    # Combine the inputs using a weighted sum
    def synaptic_integration(self, inputs_in):
        self.weighted_sum = 0
        for (value, weight) in zip(inputs_in, self.weights):
            self.weighted_sum += (value * weight) 

    # Generate a response using a step function
    def axon_hillock(self):
        return self.weighted_sum >= self.threshold

    # Fulfil the full input-to-output processes of the neuron
    def forward_pass(self, inputs_in):
        self.synaptic_integration(inputs_in)
        return self.axon_hillock()
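As a final check, the summary class can be exercised with the same values used throughout the walkthrough:

# Recreate the line detector using the summary class
Line_Detector = Digi_Neuron([0, 1, 0, 1], 2)

# A full forward pass: True means the line was detected
print(Line_Detector.forward_pass([0, 1, 0, 1]))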





Image Sources

All third-party images are sourced using a Canva Pro Licence and exported from a custom Canva Design.


 
 
