
Neural Networks and Their Applications




Nathnael Bekele


Computer science has advanced to the point where we can mimic biological processes we see in nature, including, at least on a surface level, the human brain. Computers do this using neural networks. A neural network is built from artificial neurons that take numerical inputs, such as the darkness of a pixel or the x values of a plot. Each neuron applies weights and a bias to its inputs and checks whether the result crosses a threshold that activates neurons in the next layer. New weights and biases are then applied to the activated values to produce a prediction. Predictions made from pixel darkness can identify the characters in handwritten letters or label images; predictions made from numerical values can classify data, such as taking the size of a cancerous tissue and predicting whether it will be fatal. This pass from inputs to prediction is called forward propagation.
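
To make this concrete, here is a minimal Python sketch of a single artificial neuron. The input values, weights, and bias are invented for illustration; a real network would learn its weights and biases from data.

import numpy as np

# Hypothetical inputs (e.g. pixel darkness values in [0, 1]) and made-up
# weights and bias, purely for illustration.
inputs = np.array([0.2, 0.8, 0.5])
weights = np.array([0.4, -0.6, 0.9])  # one weight per input
bias = 0.1

# Weighted sum of the inputs plus the bias.
z = np.dot(weights, inputs) + bias

# A simple threshold "activation": the neuron fires only if z is large enough.
activated = 1.0 if z > 0 else 0.0
print(z, activated)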


This mimics the brain because biological neurons activate other neurons when their electric potential is high enough. In a neural network, instead of checking an electrical potential, we pass the weighted input through an activation function, such as the sigmoid or ReLU function, to decide how much it contributes to the output. The activation function is chosen by the data scientist or statistician to fit the project. Its job is to dampen the effect of inputs that carry little weight and amplify the effect of inputs with larger weights when making predictions.
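
Below is a short sketch of what the sigmoid and ReLU activation functions look like in Python using NumPy; the sample inputs are arbitrary.

import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1); large inputs approach 1,
    # strongly negative inputs approach 0.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Passes positive values through unchanged and zeroes out negatives.
    return np.maximum(0.0, z)

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # roughly [0.12, 0.5, 0.88]
print(relu(np.array([-2.0, 0.0, 2.0])))     # [0. 0. 2.]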


Once a prediction is made, we compare it to the real outcome, such as the character in a handwritten letter identified by a human, or whether cancerous tissues of different sizes were fatal for past patients. By measuring how far the prediction is from the actual outcome, we make small changes to the weights and biases that decrease the error. This process is called back-propagation. By back-propagating many times, we find the best weights and biases for the network, which can differ depending on the activation function. This effectively trains the network to make the best predictions.
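
The sketch below illustrates this idea with a toy training loop for a single sigmoid neuron. The tissue sizes, outcomes, learning rate, and number of steps are all made up for illustration, and the gradient formulas come from applying the chain rule to a squared-error loss.

import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical tissue sizes
y = np.array([0.0, 0.0, 1.0, 1.0])   # hypothetical outcomes (0 = not fatal, 1 = fatal)

w, b = rng.normal(), rng.normal()    # start with a random weight and bias
lr = 0.5                             # learning rate: the size of each small change

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    pred = sigmoid(w * x + b)        # forward propagation
    error = pred - y                 # how far off each prediction is
    # Gradients of the squared error with respect to w and b (chain rule),
    # the calculation at the heart of back-propagation.
    grad_w = np.mean(2 * error * pred * (1 - pred) * x)
    grad_b = np.mean(2 * error * pred * (1 - pred))
    w -= lr * grad_w                 # small change that decreases the error
    b -= lr * grad_b

print(sigmoid(w * np.array([1.5, 3.5]) + b))  # predictions for unseen sizes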


A neural network is laid out in three kinds of layers: an input layer, one or more hidden layers, and an output layer.



We apply weights and biases to the values in the input layer. The result is passed through an activation function in the hidden layer. After the values come out of the activation function, new weights and biases are applied to them, giving us the values in the output layer.
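
Putting the layers together, here is a rough Python sketch of forward propagation through one hidden layer. The layer sizes are arbitrary and the weights are randomly initialized rather than trained.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative layer sizes with randomly initialized weights and biases;
# a trained network would have learned these values through back-propagation.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input layer (3) -> hidden layer (4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # hidden layer (4) -> output layer (1)

x = np.array([0.2, 0.8, 0.5])        # input layer values (e.g. pixel darkness)
hidden = relu(W1 @ x + b1)           # weights and biases, then the hidden activation
output = sigmoid(W2 @ hidden + b2)   # new weights and biases give the output layer
print(output)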


As mentioned above, one use of neural networks is classification. In biology, they can be used to classify cells from image inputs. Kusumoto and Yuasa state that a neural network “enables the automation of identifying cell types from phase contrast microscope images without molecular labeling, which will be applied to several researches and medical science.” By training the network on images that have already been marked, it can then accurately and autonomously classify unmarked images of cells. The same is done in neuroscience, where neural networks are used to identify abnormalities in brain scans. Since neural networks are a machine learning technique, they can also be used in protein structure prediction and gene editing, as discussed in previous blog posts (Darsey).


Sources:


Darsey, Jerry A., et al. “Architecture and Biological Applications of Artificial Neural Networks: A Tuberculosis Perspective.” Methods in Molecular Biology, vol. 1260, 2015, pp. 269-83, doi:10.1007/978-1-4939-2239-0_17.


Kusumoto, Dai, and Shinsuke Yuasa. “The Application of Convolutional Neural Network to Stem Cell Biology.” Inflammation and Regeneration, BioMed Central, 5 July 2019, inflammregen.biomedcentral.com/articles/10.1186/s41232-019-0103-3.



