Today we’re featuring Toradex, one of our hardware partners who will be exhibiting at Arm TechCon this week in San Jose, California. If you’re at the conference, come visit booth #1134 to see a joint demonstration of Xnor running on Toradex’s efficient Arm system-on-modules. On a single board we have been able to get Xnor object-detection models running in real time on three cameras.

Apalis iMX8 — Toradex’s Computer on Module with NXP i.MX 8 SoC

Toradex is the preferred computing solution provider for low- to medium-volume projects in the embedded industry, enabling customers with fast time to market and delivering low total cost of ownership. With over 3,000 customers, including World Cup Rally car racing teams, Toradex products are designed to run 24/7 in critical applications and withstand harsh environments with extreme temperature ranges, high vibration, and high humidity.

AI at the Edge

AI-enhanced building security smart enough to recognize authorized personnel, differentiate between people, animals, and machinery — and track movement across cameras.

Our collaboration unlocks usage scenarios requiring edge AI on resource-constrained, low-power hardware, tasks that were previously available only in the cloud or on specialized hardware:

Commercial Security: Identify authorized and unauthorized people based on face identification. Track package delivery; differentiate between intruders, pets, and trusted individuals.

Retail Analytics: Analyze the flow of foot traffic, generate heat maps, and gauge customer sentiment.

Manufacturing: Inspect output for consistency and quality control; send alerts when anomalous actions or behavior occur on production floor.

Reducing Latency, Power Consumption & Downtime

Together, Xnor and Toradex are enabling AI on edge devices where technology and the real world intersect, providing a fast, resilient, and autonomous solution that works without interruption. Many of our customer conversations center on the downsides of relying too heavily on the cloud for AI tasks.

The rationale we often hear is that dependency on cloud connectivity increases latency and power consumption, reduces overall performance, and raises the risk of downtime due to network and cloud outages — none of which are acceptable in mission-critical monitoring solutions where personal safety and property are at stake. Xnor running on edge hardware like Toradex’s keeps tasks running because they execute on the device itself — even advanced tasks like face identification running on very small models.

Eliminating Hidden Cloud Costs

Additionally, executing real-time AI tasks on-device eliminates the cost of using an external cloud solution. The cost of utilizing cloud AI quickly adds up — even 30 minutes of daily cloud vision services can cost over $350/month and require 1.4 terabytes per year of network throughput per camera.
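A back-of-envelope estimate shows how figures of this magnitude arise. All inputs below are illustrative assumptions (frame rate, per-image price, and frame size are not actual vendor pricing or figures from this article), but the arithmetic is the point:

```python
# Rough cloud-vision cost estimate for a single camera.
# All constants are illustrative assumptions, not real vendor pricing.
MINUTES_PER_DAY = 30
FPS = 1.0                      # assumed frames analyzed per second
PRICE_PER_1000_IMAGES = 6.50   # assumed $ per 1,000 analyzed frames
FRAME_SIZE_MB = 2.2            # assumed size of one uploaded frame

frames_per_day = MINUTES_PER_DAY * 60 * FPS
monthly_cost = frames_per_day * 30 / 1000 * PRICE_PER_1000_IMAGES
yearly_upload_tb = frames_per_day * FRAME_SIZE_MB * 365 / 1_000_000

print(f"~${monthly_cost:.0f}/month, ~{yearly_upload_tb:.1f} TB/year uploaded")
```

Under these assumptions the estimate lands around $350/month and 1.4 TB/year per camera; the exact numbers depend entirely on the pricing tier and frame cadence you plug in.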

Come visit the booth to see this in action. If you’re not at the conference, stay tuned for updates where we will share more. Thanks!

Come see Xnor on Toradex in action at Arm TechCon 2018!

Visit to learn more.

At Xnor we work on every aspect of computing platforms to optimize artificial intelligence and machine learning, from the software down to the hardware. We have a diverse set of skills, so it is easy to quickly build a prototype for an end-to-end project. Saman Naderiparizi, PhD, is a hardware engineer and is here to share an example of the types of problems our team solves.

Here he describes one of our projects showing how we were able to take a Raspberry Pi Zero and turn it into a real-time edge AI device:

Before joining Xnor I was an electrical engineering PhD student at the University of Washington working on two core projects: developing cameras that harvest energy from radio signals such as WiFi, and streaming HD video without the need for any batteries. Leveraging these skills, I develop low-power hardware platforms to run deep neural network models.


The Raspberry Pi Zero powering our object classifier and emotion detection.


To demonstrate Xnor’s efficient machine learning models, I want to show you what we humorously call our “Thing Detector”. It is a battery-powered device running on a $5 Raspberry Pi Zero that utilizes the Raspberry Pi camera to do complex and accurate object classification and emotion detection in real time, while using a fraction of the compute capability present in desktop computers and cloud servers.


The xnorized models can recognize 80 object classes and detect emotions from facial expressions.


An image is fed to the Pi Zero and an inference is made. For object classification, if it sees a person, it chirps “person”. It can also infer human emotions such as happy, sad, angry, and scared. This is made possible with Xnor’s efficient binarized models in concert with our efficient inference engine.

The model running on this tiny computer is capable of recognizing 80 types of objects at several frames per second and takes up only a few megabytes of space. It’s a low-power device, so it can run for about 5 hours on an onboard battery with the same capacity as two AA batteries.
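A quick sanity check on that runtime claim: using a typical alkaline AA capacity as an assumption (these are not measured figures from the device), the implied average power draw is well within what a Raspberry Pi Zero plus camera can consume:

```python
# Rough power-budget check for the demo.
# Cell capacity and voltage are assumed typical alkaline AA values,
# not measurements from the actual device.
AA_CAPACITY_MAH = 2500   # assumed capacity of one alkaline AA cell
AA_VOLTAGE = 1.5         # nominal alkaline cell voltage
NUM_CELLS = 2
RUNTIME_HOURS = 5

energy_wh = NUM_CELLS * AA_CAPACITY_MAH / 1000 * AA_VOLTAGE
avg_power_w = energy_wh / RUNTIME_HOURS
print(f"~{energy_wh:.1f} Wh budget -> ~{avg_power_w:.1f} W average draw")
```

Roughly 7.5 Wh over 5 hours implies about 1.5 W average, consistent with a Pi Zero running camera capture and continuous inference.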

To understand what’s happening in the real world, a computer needs to process input from a variety of sources such as imagery, video, and audio. This requires computationally demanding processes to run quickly — all made more challenging when running on limited hardware like our Raspberry Pi Zero.

Taking an image and making an accurate inference that it contains a human or other physical objects, or inferring emotion from a face, previously required billions of floating point operations…per image. Depending on the usage scenario, this may need to be performed on hundreds of frames per second in real time. Typically these workloads run on graphics processing units (GPUs). In contrast, an xnorized network contains binary (0 or 1) values, which allows floating point operations such as multiply-add to be converted into simple bitwise operations such as XNOR and POPCOUNT. Additionally, because of the reduced bit width from 32-bit floating point to 1-bit binary values, the memory requirements of xnorized models shrink significantly. This is why xnorized networks outperform traditional neural networks on constrained hardware. What we’re doing here would have needed significant hardware or cloud resources just a short while ago.
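The core trick can be sketched in a few lines. With weights and activations constrained to {-1, +1} and packed as bits (1 for +1, 0 for -1), a dot product reduces to XNOR followed by a popcount. This is an illustrative sketch of the general binary-network technique; Xnor’s actual inference engine is not public:

```python
# Binary dot product via XNOR + POPCOUNT.
# Bit convention (an assumption for this sketch): bit 1 -> +1, bit 0 -> -1.
N = 8  # vector length in bits

def binary_dot(a_bits: int, b_bits: int, n: int = N) -> int:
    mask = (1 << n) - 1
    matches = ~(a_bits ^ b_bits) & mask   # XNOR: 1 wherever bits agree
    pop = bin(matches).count("1")         # POPCOUNT
    return 2 * pop - n                    # agreeing bits add +1, rest -1

# Cross-check against the equivalent +/-1 floating-point dot product.
def to_pm1(bits: int, n: int = N):
    return [1.0 if (bits >> i) & 1 else -1.0 for i in range(n)]

a, b = 0b10110010, 0b11010110
assert binary_dot(a, b) == sum(x * y for x, y in zip(to_pm1(a), to_pm1(b)))
```

On real hardware the XNOR and popcount each handle 32 or 64 weights per instruction, which is where the speed and memory advantage over 32-bit floating point comes from.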

While the prototype I’m showing you is an internal demo, we have successfully deployed production-quality models for our customers. We’re currently enabling new capabilities on $2 embedded chips and cameras that are components of everyday consumer appliances, mobile handsets, and home security devices.

I hope this gives you a glimpse into the intriguing opportunities we work on. If you find machine learning and AI as fascinating as I do, come join our team!

Nearly forty years ago Paul Allen and Bill Gates set an audacious goal to put a computer on every desk and in every home. Since then we’ve seen our lives change as computers became increasingly available, miniaturizing from expensive mainframes to tremendously powerful handheld smartphones that nearly anyone can access.

I believe we’re on the brink of a similar breakthrough with artificial intelligence, and we are about to witness the next computer revolution. Until now, AI has required vast amounts of computing power to create and run deep learning models, relegating it to research, running in expensive data centers, or controlled by an elite group of cloud computing vendors. Where AI is truly needed is at the edge — cameras, sensors, mobile devices, and IoT — where AI can interact with the real world in real time.

Jon Gelsey, Carlo C del Mundo, and Stephanie Wang in Xnor’s office

I’m excited and honored to be joining Xnor as CEO, joining Xnor’s founders — Professor Ali Farhadi and Dr. Mohammad Rastegari — to enable AI on billions of devices such as cameras, phones, wearables, autonomous vehicles, and IoT devices, something that previously wasn’t feasible. Ali and Mohammad’s breakthrough discoveries have dramatically shrunk the compute requirements for advanced AI functions such as computer vision and speech recognition. Xnor is revolutionizing what’s possible on edge devices, delivering sophisticated AI on small and inexpensive devices, e.g. powerful computer vision even on something like a $5 Raspberry Pi Zero. We are already working with companies accomplishing amazing things on autonomous vehicles, home security, and mobile devices.

Can AI Save Lives?

I’m also incredibly optimistic about the good that AI can bring to the world. Movies and science fiction often paint a dystopian future of how AI can be misused. Instead, I see many possibilities to improve lives — perhaps even save them. One of my friends is an avid sailor, and I sometimes worry about what would happen if his boat capsized in a storm. Similar incidents in the recent past inspired crowdsourcing efforts that enlisted people to scour satellite images of oceans spanning thousands of square miles for signs of survivors. As noble as these efforts were, it was still looking for a needle in a haystack, with human eyes susceptible to fatigue reviewing imagery that quickly became out of date. I envision a future, already possible today, where autonomous search and rescue drones tirelessly traverse large expanses of ocean, equipped with cameras and utilizing deep machine learning to detect human life, boat wreckage, and survival gear in real time to expedite a rescue.

Imagine drones using AI for search and rescue missions

What else is in the realm of possibility to improve our existence? One of the emerging areas of AI is detecting human emotion and behavioral intent to improve retail experiences, utilizing deep learning models that measure consumer intent and engagement through movement and behavior. Those same capabilities could be used to alert us to potential terrorist activity and human trafficking, and to identify people in distress.

As with most exciting journeys, they’re rarely straight and can take a few surprise turns — but they are always memorable and worth venturing on. I’m looking forward to starting this one.

Learn more in our press release.

About Xnor

Xnor brings highly efficient AI to edge devices such as cameras, cars, drones, wearables, and IoT devices. The Xnor platform allows product developers to run complex deep learning algorithms — previously restricted to the cloud — locally, on a wide range of mobile and low-energy devices. Xnor is a venture-funded startup, founded on award-winning research conducted at the University of Washington and the Allen Institute for Artificial Intelligence. Xnor’s industry-leading technology is used by global corporations in aerospace, automotive, retail, photography, and consumer electronics.