Qualcomm has its own AI engine, Hexagon, in the Snapdragon 8 Gen 2. Prophesee has its own neuromorphic IP too, although I believe ours is superior, hence why they partnered with us.
So it is possible it excludes us without including anyone else... but I like our chances.
Prophesee has never been mentioned by the company as a competitor either.
That is my understanding
Snapdragon 8 Gen 2 deep dive: Everything you need to know (androidauthority.com)

To boost performance, the Tensor accelerator inside the DSP has doubled in size for twice the performance and has new optimizations specifically for language processing. Qualcomm is also debuting what it calls micro tile inference support, essentially chopping up imaging and other problems into smaller tiles to save on memory at the expense of some result accuracy. Along those lines, the addition of INT4 also means that developers can now implement machine learning problems requiring high bandwidth at the expense of some accuracy if compressing a larger model. Qualcomm is providing tools to partners to help support INT4, though it will require a retooling of existing applications to work.
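To get a feel for the INT4 trade-off the article describes, here is a minimal sketch of symmetric 4-bit quantization: weights are squeezed into the 16 levels an INT4 value can represent, cutting memory roughly 8x versus float32 at the cost of rounding error. This is a toy illustration of the general technique, not Qualcomm's tooling; all names and numbers here are made up for the example.

```python
def quantize_int4(weights):
    # Symmetric INT4: representable integer range is -8..7 (16 levels).
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07, -0.91, 0.44]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)

# Worst-case rounding error is half a quantization step (scale / 2),
# which is the "some accuracy" the article says you give up.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The larger the model's dynamic range, the larger `scale` and therefore the error, which is why compressing a big model to INT4 costs accuracy.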
Altogether, the Snapdragon 8 Gen 2 Hexagon DSP offers 4.35x the performance of its predecessor, depending on the ML model. (In this case, Qualcomm is comparing mobileBERT natural language processing). Sounds impressive, but I think the more significant change is the introduction of Hexagon Direct Link, which more closely connects its ISP to the AI Engine. The company dubs this its “Cognitive ISP.”
...
Qualcomm doubled the physical link between the image signal processor (ISP), Hexagon DSP, and Adreno GPU, driving higher bandwidth and lowering latency. This allows the Snapdragon 8 Gen 2 to run much more powerful machine-learning tasks on imaging data right off the camera sensor. RAW data, for instance, can be passed directly to the DSP/AI Engine for imaging workloads, or Qualcomm can use the link to upscale low-res gaming scenarios to assist with GPU load balancing.
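The benefit of that direct ISP-to-DSP link is essentially zero-copy access: the AI stage works on the sensor buffer where it sits instead of staging a copy through system memory first. A conceptual Python sketch of the difference (toy code, not Qualcomm's actual API; the "inference" stage is a stand-in):

```python
def ai_sharpen(buf: memoryview) -> None:
    # Toy "inference" stage: bump each RAW sample, saturating at 255.
    for i, v in enumerate(buf):
        buf[i] = min(v + 16, 255)

raw_frame = bytearray(b"\x10\x20\x30\x40" * 4)  # stand-in for RAW sensor data

# Direct-link style: the AI engine works on a view of the ISP buffer, no copy.
ai_sharpen(memoryview(raw_frame))

# Staged style: an extra copy out and back costs bandwidth and latency.
frame2 = bytearray(b"\x10\x20\x30\x40" * 4)
staging = bytearray(frame2)        # copy to system memory
ai_sharpen(memoryview(staging))
frame2[:] = staging                # copy the result back

assert raw_frame == frame2         # same result, one fewer round trip
```

Both paths compute the same thing; the doubled physical link is what makes the first path fast enough for full-rate imaging data.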
US2020104690A1 — NEURAL PROCESSING UNIT (NPU) DIRECT MEMORY ACCESS (NDMA) HARDWARE PRE-PROCESSING AND POST-PROCESSING — 2018-09-28
[0015] FIG. 4 is a block diagram illustrating an exemplary software architecture that may modularize artificial intelligence (AI) functions, in accordance with aspects of the present disclosure.
...
In the exemplary aspect, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422, the DSP 424, and the GPU 426, or may be run on the NPU 428.
...
In aspects of the present disclosure, the read client RCLT and the write client WCLT may refer to an array of compute elements of the NPU 700, which may support, for example, 16 NDMA read channels and 16 NDMA write channels for the various compute units of the NPU 700.
[0021] FIG. 7 is a block diagram illustrating a neural processing unit (NPU), including an NPU direct memory access (NDMA) core and interfaces configured to provide hardware pre-processing and post-processing of NDMA data, according to aspects of the present disclosure.
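A toy model of the NDMA idea in the excerpt — a pool of 16 read and 16 write channels, with the hardware padding data on the way into the compute array (pre-processing) and stripping it on the way back out (post-processing). Channel counts come from the excerpt; the class, method names, and padding behavior are purely my own illustration:

```python
NUM_READ_CHANNELS = 16
NUM_WRITE_CHANNELS = 16

class NDMACore:
    def __init__(self, pad: int = 2):
        self.pad = pad
        self.read_busy = [False] * NUM_READ_CHANNELS
        self.write_busy = [False] * NUM_WRITE_CHANNELS

    def _claim(self, busy):
        ch = busy.index(False)    # raises ValueError if all channels are busy
        busy[ch] = True
        return ch

    def read(self, row):
        """DMA a row toward the compute array, zero-padding both edges."""
        ch = self._claim(self.read_busy)
        padded = [0] * self.pad + list(row) + [0] * self.pad
        self.read_busy[ch] = False
        return padded

    def write(self, row):
        """DMA a result row back to memory, stripping the padding."""
        ch = self._claim(self.write_busy)
        stripped = list(row[self.pad:-self.pad])
        self.write_busy[ch] = False
        return stripped

core = NDMACore(pad=2)
padded = core.read([5, 6, 7])     # padding added in "hardware", not software
restored = core.write(padded)     # padding stripped on the way out
```

Doing the pad/strip in the DMA path means the compute array and the host CPU never spend cycles on it — which is the point of hardware pre- and post-processing.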