An Insight into NeuroEvolution and Genetic Algorithms for Text Classification

By Kevlyn Kadamala and Josephine Griffith in Journal Papers

January 26, 2024

Natural Language Processing (NLP) systems have, over the past decade, shifted from rule-based techniques to machine learning-based algorithms. This shift has produced a range of architectures and models tailored to different tasks, including the Transformer, the CNN and the RNN, which have become ubiquitous in NLP. However, designing these neural network architectures usually requires in-depth analysis and knowledge of the multiple domain areas involved in the problem at hand. In our work, we evaluate an alternative to this manual design process in the domain of text classification. We propose using the Genetic Algorithm with gradient descent (GAGD) and NeuroEvolution of Augmenting Topologies (NEAT) to search for an optimal neural architecture for the Reuters-21578 and 20 Newsgroups datasets. We evaluate and compare the results of the two algorithms against current state-of-the-art architectures and provide insight into their performance.
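To illustrate the evolutionary-search idea behind the paper, the following is a minimal sketch of a genetic algorithm over neural architectures. It is not the paper's GAGD implementation: the genome here is just a list of hidden-layer widths, and the fitness function is a placeholder (a real run would train a classifier on Reuters-21578 or 20 Newsgroups and return validation accuracy). The layer choices and fitness target are illustrative assumptions.

```python
import random

random.seed(0)

# Illustrative search space: each genome is a list of hidden-layer widths.
LAYER_CHOICES = [32, 64, 128, 256]

def random_genome():
    """Sample an architecture with 1-3 hidden layers."""
    return [random.choice(LAYER_CHOICES) for _ in range(random.randint(1, 3))]

def fitness(genome):
    """Placeholder fitness. In the real setting this would train the
    encoded network and return validation accuracy; here we simply
    reward architectures whose total width is near an arbitrary target."""
    return -abs(sum(genome) - 192)

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto a
    suffix of the other."""
    child = a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]
    return child or [random.choice(LAYER_CHOICES)]

def mutate(genome, rate=0.2):
    """Resample each layer width with a fixed probability."""
    return [random.choice(LAYER_CHOICES) if random.random() < rate else w
            for w in genome]

def evolve(pop_size=20, generations=30):
    """Truncation selection: keep the top half, refill with offspring."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best architecture:", best)
```

NEAT goes further than this fixed-encoding sketch by evolving the network topology itself (adding nodes and connections over generations), while GAGD combines the evolutionary search with gradient-descent training of each candidate.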

Cite:

@article{kadamala2023insight,
  title={An Insight into NeuroEvolution and Genetic Algorithms for Text Classification},
  author={Kadamala, Kevlyn and Griffith, Josephine},
  journal={Procedia Computer Science},
  volume={225},
  pages={1379--1387},
  year={2023},
  publisher={Elsevier}
}