Facebook and its partners in the artificial intelligence (AI) community are building open source tools to accelerate AI development and make the ecosystem more interoperable. Following are the latest updates on these initiatives.
ONNX adds partners
The proliferation of different AI frameworks, hardware, and other technologies has made it difficult for developers to build with tools that work together. Open Neural Network Exchange (ONNX[1]), an open specification for representing deep learning models, is aimed at creating a more interoperable ecosystem. It allows developers to easily move models between state-of-the-art tools so they can choose the best combination for their needs.
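As a concrete illustration of that portability, the sketch below exports a small PyTorch model to the ONNX format so it can be loaded by any ONNX-compatible tool; the tiny model and the "model.onnx" file name are placeholders chosen for the example, not anything specific to the announcements here.

```python
import torch
import torch.nn as nn

# A tiny placeholder model; any trained PyTorch module could be used here.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model.eval()

# Example input that defines the graph's input shape during tracing.
dummy_input = torch.randn(1, 10)

# Export the traced model to ONNX; the resulting file can be consumed by
# other frameworks, converters, and runtimes that understand the format.
torch.onnx.export(model, dummy_input, "model.onnx")
```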
ONNX launched in September 2017 as a partnership between Facebook, Amazon Web Services (AWS), and Microsoft. It has grown rapidly with the addition of leading technology companies including AMD, ARM, IBM, Intel, NVIDIA, and Qualcomm, as well as BITMAIN, MediaTek, and Preferred Networks.
In May at its annual F8 developer conference, Facebook announced the availability of several new capabilities[2], including a production-ready Core ML converter, which allows developers to quickly build apps with intelligent new features across Apple products. In addition, Baidu added support for its PaddlePaddle deep learning framework. Six popular deep learning frameworks now support the ONNX model format.
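To give a sense of the workflow the Core ML converter enables, here is a minimal sketch, assuming the open source onnx and onnx-coreml Python packages; the package name and convert call come from that project and may differ between releases.

```python
import onnx
from onnx_coreml import convert  # assumes the onnx-coreml package is installed

# Load a previously exported ONNX model (e.g., the "model.onnx" file above).
onnx_model = onnx.load("model.onnx")

# Convert it to a Core ML model that can be bundled into an iOS or macOS app.
coreml_model = convert(onnx_model)
coreml_model.save("model.mlmodel")
```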
NVIDIA's TensorRT 4 also has a native ONNX parser that provides an easy path to import ONNX models from deep learning frameworks into TensorRT for optimized inference on GPUs. These capabilities build on earlier updates from AWS, whose Model Server for Apache MXNet can serve ONNX models, and from Microsoft, whose next major Windows update will allow ONNX models to run natively on hundreds of millions of Windows devices.
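The import path into TensorRT follows a similar pattern. The sketch below uses the OnnxParser class from TensorRT's Python API to read an exported ONNX file into a network definition; the exact builder and network-creation calls vary across TensorRT releases, so treat the details as illustrative rather than tied to a specific version.

```python
import tensorrt as trt  # assumes the TensorRT Python bindings are installed

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Create a builder, an empty network, and an ONNX parser tied to that network.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

# Populate the network definition from an exported ONNX file; TensorRT can
# then build an optimized inference engine from this network.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```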
More recently, Hewlett Packard Enterprise (HPE) joined ONNX to help advance open AI standards. Additionally, partners are continuing