Time-efficient offloading and execution of machine learning tasks between embedded systems and fog nodes
As embedded systems become more prominent in society, it is important that the technologies running on them are used efficiently. One such technology is the Neural Network (NN). NNs, combined with the Internet of Things (IoT), can exploit the massive amounts of data these systems produce to optimize, control, and automate embedded systems, giving them more functionality than ever before. However, the status quo of offloading all NN functionality onto external devices has many flaws: it forces the embedded system to rely entirely on networks that may suffer from high latency or connection issues, and the networks themselves may expose it to security risks. To reduce the reliance of IoT devices on networks, we examined several solutions, such as running some NNs solely on the IoT device, or splitting an NN and distributing the resulting subnetworks across different devices. We found that, for shallow NNs, the IoT device could run the NN itself faster than it could offload the inference to an external device, but the device needed to offload its inputs once the NNs grew in depth and complexity. When splitting the NN, the number of messages sent between devices could be reduced by up to 97% while reducing the accuracy of the NN by only 3%.
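To illustrate the split-NN idea, the following is a minimal sketch (not the thesis's implementation): a tiny two-part MLP in NumPy where the embedded device runs the layers up to a narrow bottleneck and transmits only the bottleneck activations to the fog node, which runs the remaining layers. All layer sizes (256, 8, 4) and the random weights are hypothetical, chosen only to show how a narrow split point shrinks the per-inference payload; the reduction printed here is illustrative and is not the 97% message reduction measured in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: the raw sensor input is large, the split
# point (bottleneck) is narrow, so far fewer values cross the network.
INPUT_DIM, BOTTLENECK_DIM, OUTPUT_DIM = 256, 8, 4

# Device-side subnetwork weights: input -> bottleneck
W1 = rng.standard_normal((INPUT_DIM, BOTTLENECK_DIM))
# Fog-side subnetwork weights: bottleneck -> output
W2 = rng.standard_normal((BOTTLENECK_DIM, OUTPUT_DIM))

def device_forward(x):
    # Runs on the embedded device; only the bottleneck activations
    # produced here would be sent over the network.
    return np.maximum(0.0, x @ W1)  # ReLU

def fog_forward(h):
    # Runs on the fog node, on the activations received from the device.
    return h @ W2

x = rng.standard_normal(INPUT_DIM)   # one raw sensor reading
payload = device_forward(x)          # 8 values transmitted instead of 256
y = fog_forward(payload)             # final prediction on the fog node

reduction = 1.0 - payload.size / x.size
print(f"per-inference payload shrinks by {reduction:.0%}")
```

The design choice mirrors the trade-off in the abstract: placing the split at a narrower layer cuts network traffic further, but forcing activations through a tighter bottleneck can cost some accuracy.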