Within the field of computational intelligence, evolutionary algorithms and neural networks are two frameworks that aim to advance the goal of optimization. Evolutionary algorithms emulate processes found in nature, while neural networks loosely approximate the structure of the human brain to identify patterns. This project combines these frameworks by using an evolutionary algorithm to select the architecture of a neural network. The resulting model (EA-CNN) is compared against two baseline models: a fully connected neural network (FCNN) and a standard convolutional neural network (CNN). The Scikit-Learn Digits dataset is used to train and evaluate the models, with the goal of identifying the best-performing model on the test set.
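The overall idea of evolving an architecture can be sketched as a simple generational loop. The genome encoding, mutation scheme, population size, and the stand-in fitness function below are illustrative assumptions, not the project's actual implementation; in the real project, fitness would come from training the candidate network and measuring its loss.

```python
import random

def random_genome():
    # A genome encodes one candidate architecture (hypothetical fields).
    return {"conv_channels": random.choice([4, 8, 16, 32]),
            "hidden_units": random.choice([16, 32, 64, 128])}

def mutate(genome):
    # Re-sample one architecture choice at random.
    child = dict(genome)
    key = random.choice(list(child))
    child[key] = random_genome()[key]
    return child

def fitness(genome):
    # Stand-in for "train the network and return its loss" (lower is
    # better); here we pretend 16 channels and 64 hidden units are optimal.
    return abs(genome["conv_channels"] - 16) + abs(genome["hidden_units"] - 64)

def evolve(pop_size=10, generations=25):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)           # lower loss = fitter
        parents = population[: pop_size // 2]  # truncation selection
        offspring = [mutate(random.choice(parents))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return min(population, key=fitness)
```

Each generation keeps the fitter half of the population and refills the rest with mutated copies, so the search gradually concentrates on better-scoring architectures.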
A convolutional neural network (CNN) was implemented in PyTorch with the following general architecture:
1. A 2D convolutional layer followed by an activation function and pooling.
2. Flattening of the output.
3. A linear layer with another activation function.
4. A final linear layer followed by a log softmax function.
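The four steps above can be sketched as a small PyTorch module. The channel count, kernel size, pooling, and hidden width are illustrative assumptions rather than the project's exact hyperparameters; the input shape matches the 8x8 single-channel Digits images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DigitsCNN(nn.Module):
    """Hedged sketch of the four-step architecture described above."""

    def __init__(self, n_classes=10):
        super().__init__()
        # Step 1: 2D convolution (activation and pooling applied in forward)
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)            # 8x8 -> 4x4
        # Step 3: hidden linear layer
        self.fc1 = nn.Linear(8 * 4 * 4, 32)
        # Step 4: output linear layer
        self.fc2 = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv(x)))    # step 1: conv + ReLU + pool
        x = torch.flatten(x, 1)                # step 2: flatten
        x = F.relu(self.fc1(x))                # step 3: linear + ReLU
        return F.log_softmax(self.fc2(x), dim=1)  # step 4: linear + log softmax
```

Because the model ends in a log softmax, it pairs naturally with a negative log-likelihood loss during training.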
The EA-CNN achieved a best fitness score of 0.029 after 25 generations.
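The report does not define how fitness was computed, but one plausible choice, consistent with the log-softmax output above, is the candidate network's negative log-likelihood loss on held-out data (lower is better). The function below is a hedged sketch under that assumption.

```python
import torch
import torch.nn.functional as F

def evaluate_fitness(model, inputs, targets):
    # Assumed fitness: NLL loss on a held-out set; the model is expected
    # to output log-probabilities (e.g. via a final log_softmax layer).
    model.eval()
    with torch.no_grad():
        log_probs = model(inputs)
        return F.nll_loss(log_probs, targets).item()
```

In an EA loop, this value would be computed once per candidate architecture after a short training run, and the population ranked by it.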
This project explored the integration of evolutionary algorithms with neural networks for architecture optimization. While the EA-CNN demonstrated efficient initial optimization, its final performance lagged behind the standard CNN and FCNN baselines. Future work could address computational constraints by increasing the number of generations and training epochs, or by incorporating parallel processing to explore more candidate configurations.