• Question: How are neural networks made?

    Asked by anon-379407 on 19 Jan 2024.

      Carl Peter Robinson answered on 19 Jan 2024 (last edited 19 Jan 2024, 4:27 pm):


      That’s an inquisitive question! Historically, neural network theory began in the 1940s and 1950s with the work of Walter Pitts and Warren McCulloch, and then Donald Hebb. Later, Frank Rosenblatt created the perceptron, which gave us a mathematical model of how a neuron in the brain might work. It’s not really how organic neurons work, but it provided a representation that enabled the later creation of multi-layer networks, very similar to what we use today (e.g., in deep learning).

      So, I used the word “mathematical” because that’s what a neural network is based on: mathematical equations that are converted into computer algorithms using a programming language such as Python. If you search online for an image of a basic neural network, you’ll see columns of circles, with lines connecting each column to the next. The circles represent the neurons; the lines to the left of a circle represent the inputs going into that neuron, while the lines to the right represent its output.

      Each input to a neuron is first multiplied by a weight (a number the network learns), the weighted inputs are summed (added together), and the result is passed through an activation function to determine the neuron’s output. In actual terms, this is represented by mathematical equations and corresponding code, for example in Python. This enables you to feed the network some input data and get a result out at the other end.
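
      To make that concrete, here is a minimal sketch of a single neuron in Python. All the numbers (inputs, weights, bias) are made up purely for illustration, and the sigmoid is just one common choice of activation function:

          import math

          def sigmoid(x):
              # Squashes any number into the range (0, 1).
              return 1 / (1 + math.exp(-x))

          def neuron_output(inputs, weights, bias):
              # Multiply each input by its weight, add them up with the bias,
              # then pass the total through the activation function.
              total = sum(i * w for i, w in zip(inputs, weights)) + bias
              return sigmoid(total)

          # Made-up example values:
          print(neuron_output([0.5, 0.3], [0.8, -0.2], 0.1))  # about 0.61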

      The code behind a neural network consists of several functions that each perform a specific role as part of the network’s training phase. Inside each function will be things like loops and arrays that perform mathematical operations on the network’s values and store the updated values. This is all repeated over a number of cycles in order to get the best possible output. I hope that made some sense; note that this is just one way of describing a basic neural network.
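
      As a rough sketch of what those loops and arrays can look like (again, the layer sizes and weight values here are made up for illustration, and real libraries do this far more efficiently), here is data flowing through a small network one layer at a time:

          import math

          def sigmoid(x):
              return 1 / (1 + math.exp(-x))

          def layer_output(inputs, weight_rows, biases):
              # Each row of weights belongs to one neuron in this layer.
              outputs = []
              for weights, bias in zip(weight_rows, biases):
                  total = sum(i * w for i, w in zip(inputs, weights)) + bias
                  outputs.append(sigmoid(total))
              return outputs

          def forward(network, inputs):
              # The output of one layer becomes the input to the next.
              values = inputs
              for weight_rows, biases in network:
                  values = layer_output(values, weight_rows, biases)
              return values

          # A made-up network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
          network = [
              ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.1]),  # hidden layer
              ([[1.2, -0.7]], [0.05]),                   # output layer
          ]
          print(forward(network, [0.9, 0.2]))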

      If you’re feeling brave and would like to look up more information online, I encourage you to search for backpropagation and gradient descent: two of the main mathematical processes used inside a neural network. Think of these processes as being like a situation in a restaurant where the chef is trying to please a difficult customer with a bowl of soup. The chef prepares the soup and a waiter takes it forward into the dining area, to the customer. But when the customer tries it, they say they don’t like the soup. So, the waiter takes the soup back to the chef in the kitchen. The chef adds some more salt and the waiter takes it forward to the customer again. But once more, the customer tries it and does not like the soup. The waiter takes it back to the kitchen, and this time the chef adds some cream. The waiter takes it forward to the dining area, the customer tries it, and this time they say they like it!

      What a difficult customer! The chef had to find a way to optimise the taste of the soup before the customer would accept it. This is kind of what a neural network does: it optimises its output by repeatedly adjusting its internal values (the weights), using feedback about how wrong the current output is.
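
      For a tiny taste of gradient descent in code (a deliberately simplified, one-parameter example of my own, a bit like the chef tuning a single ingredient), you can watch the error shrink over the cycles:

          # Gradient descent on a single value w, trying to make
          # prediction = w * x match a target (taste, adjust, taste again).
          # All the numbers are made up for illustration.
          x, target = 2.0, 10.0       # the input and the answer we want
          w = 0.0                     # start with a guess
          learning_rate = 0.1

          for step in range(20):      # the repeated taste-and-adjust cycles
              prediction = w * x
              error = prediction - target
              loss = error ** 2                # how unhappy the "customer" is
              gradient = 2 * error * x         # which way (and how much) to adjust
              w -= learning_rate * gradient    # nudge w to reduce the loss
              print(f"step {step}: w = {w:.3f}, loss = {loss:.3f}")

      After a handful of steps, w settles near 5.0, the value that makes w * x equal the target. A real network does the same thing for millions of weights at once, with backpropagation working out each weight’s gradient.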


      Fraser Smith answered on 19 Jan 2024:


      Neural networks are made by people writing code. There have been huge advances over the last 50-60 years, leading to things like ChatGPT today. Key features of neural networks (the units and how they work) very loosely resemble how real neurons work in the brain, and it is important to realize that how the brain works provided key inspiration for the development of these really powerful AI systems.
      Our brain is essentially an extremely complicated neural network!
