NXP Accelerates Edge AI with Kinara Acquisition

  • Published: October 30, 2025
  • Read: 3 min
  • Source: NXP Semiconductors

NXP + Kinara: Discrete NPUs bring low-latency, on-device AI to the edge—from factory floors to smart vehicles. Source: NXP Semiconductors

NXP Semiconductors today announced it has completed the acquisition of Kinara, a pioneer in high-performance, power-efficient Discrete Neural Processing Units (DNPUs). The combination delivers a scalable path to low-latency, privacy-preserving, on-device AI across industrial, automotive, retail, healthcare, and smart-space applications.

“Edge AI has to be local, responsive, efficient, and secure,” said Ali Osman Ors, Director, AI/ML Strategy and Technologies, Edge Processing at NXP. “With Kinara’s discrete NPUs and software integrated into NXP’s portfolio, developers get a full-stack platform to deploy from TinyML to generative AI—without putting the cloud in the critical path.”

Why It Matters

  • Real-time edge inference cuts latency, protects data, reduces bandwidth costs, and improves resilience.

  • The edge AI acceleration market is growing rapidly as developers seek secure, cost-effective inference performance on the device.

What NXP Gains

  • Scalable AI Acceleration: Kinara’s Ara-1 (~6 eTOPS) for vision workloads and Ara-2 (~40 eTOPS) for advanced large language models (LLMs) and vision-language models (VLMs) let customers scale performance independently of the host MPU.

  • Purpose-Built AI Silicon: Architected specifically for edge neural inference—dataflow execution with dense MAC arrays, tightly coupled on-chip memory, and deterministic scheduling—Kinara’s DNPUs deliver significantly higher performance-per-watt than general-purpose CPUs and GPUs.

  • Modern Model Coverage: Efficient execution of Convolutional Neural Networks (CNNs) and Transformer models—spanning classic vision to multimodal and generative AI.

  • Unified Tooling: Kinara’s SDK, model-optimization tools, and pre-optimized models integrate with NXP’s eIQ® software, giving developers a single build–optimize–deploy flow.

  • System Flexibility: Offload heavy inference to the DNPU while i.MX application processors (e.g., i.MX 8M Plus, i.MX 95) handle pre/post-processing, I/O, UI, safety, and connectivity—optimizing latency and energy.
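
The host/DNPU split described above can be sketched in a few lines. Everything here is a hypothetical placeholder—in particular, `dnpu_infer` is a stub standing in for the offloaded accelerator call, not an NXP or Kinara API:

```python
import numpy as np

def preprocess(frame):
    # Host MPU (e.g., an i.MX application processor): normalize an
    # 8-bit camera frame to float32 in [0, 1].
    return frame.astype(np.float32) / 255.0

def dnpu_infer(tensor):
    # Placeholder for neural-network inference offloaded to the DNPU;
    # here it just fabricates a two-class score vector for illustration.
    return np.array([tensor.mean(), 1.0 - tensor.mean()])

def postprocess(scores):
    # Host MPU: pick the winning class label for downstream I/O or UI.
    return int(np.argmax(scores))

frame = np.full((224, 224), 128, dtype=np.uint8)   # fake camera frame
label = postprocess(dnpu_infer(preprocess(frame)))
```

The point of the pattern is that only the compute-heavy middle stage moves to the accelerator, while latency-sensitive I/O, safety, and UI logic stay on the host.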

What It Enables (Examples)

  • Industrial: Real-time visual QA, predictive maintenance, and hazard detection at the edge.

  • Healthcare & Eldercare: On-device multimodal monitoring (voice + vision) with enhanced privacy.

  • Retail & Smart Spaces: Context-aware analytics and generative experiences with sub-second response.

  • Automotive & Transport: In-cabin assistants and perception features without constant network dependency.

For Developers

  • One Toolchain: eIQ® + Kinara SDK for compilation, quantization/optimization, deployment, and profiling.

  • Reference Designs & Models: Faster prototyping across vision, speech, and multimodal AI.

  • Production-Ready Stack: Security, PMICs, connectivity, and analog from NXP’s portfolio—backed by expert support.
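
As an illustration of the quantization step such a toolchain performs, here is a minimal sketch of symmetric per-tensor int8 post-training quantization in plain NumPy. This shows the generic technique only; it is not the eIQ or Kinara implementation:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization of float32 weights to int8."""
    scale = float(np.max(np.abs(weights))) / 127.0   # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)   # stand-in weight tensor
q, s = quantize_int8(w)
max_err = float(np.max(np.abs(dequantize(q, s) - w)))  # bounded by scale / 2
```

Storing weights as int8 cuts memory traffic roughly 4x versus float32, which is a large part of why quantization matters on memory-bound edge accelerators.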

About NXP

NXP Semiconductors N.V. enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better, and safer. NXP is driving innovation in embedded applications for the automotive, industrial & IoT, mobile, and communication infrastructure markets.

About Kinara

Kinara designs high-performance, power-efficient discrete NPUs and a comprehensive AI software stack for edge devices, accelerating CNNs and Transformer-based models from vision to generative AI.

Definitions

  • DNPU = Discrete Neural Processing Unit, a standalone AI accelerator used alongside a host processor.

  • NPU = Neural Processing Unit (general term).

  • CNN = Convolutional Neural Network.

  • LLM = Large Language Model.

  • VLM = Vision-Language Model.

  • eTOPS = “equivalent tera operations per second,” a throughput metric indicating trillions of AI operations per second.
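
For a sense of scale, a back-of-envelope calculation (the per-inference cost below is an assumed example for illustration, not a Kinara benchmark): a 40 eTOPS part running a model that needs 8 billion operations per frame tops out at 5,000 inferences per second at full utilization.

```python
# Illustrative upper-bound throughput implied by an eTOPS rating.
# The per-inference cost is an assumed example, not a measured figure.
rated_etops = 40                  # e.g., Ara-2's ~40 eTOPS rating
ops_per_inference = 8e9           # hypothetical model: 8 GOPs per frame
max_inferences_per_sec = rated_etops * 1e12 / ops_per_inference
print(max_inferences_per_sec)     # 5000.0 (theoretical ceiling, not sustained)
```

Real throughput is lower once memory bandwidth, utilization, and batch size are accounted for, which is why eTOPS is best read as a ceiling rather than a promise.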


Contact and Company Information

Released by
NXP Semiconductors
Contact:
Venis Kalderon Schmölder
