Deadpool
Did someone say KFC
How about some of the huge transactions on the market so far.
Is that distant drums I hear?
Yeah, they appear to have lumped all the chips listed into the same basket. Akida only needs an external pre-processor to load the initial conditions, i.e. the trained weights and other configuration. Then all processing is done within the Akida Neural Fabric.

“For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, BrainChip's Akida™, and Google Inc's Tensor Processing Unit (TPU™) require full host processors for integration for their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low-SWaP space missions.”
Am I reading this wrong? Are they saying that Akida needs a full host processor for integration for their SDK? I thought one of the main selling points of Akida is that it doesn’t need a host processor and can do all the computation alone?
Sounds promising!

FEBRUARY 2, 2023
Next-gen Siri: The future of personal assistant AI

What advancements and features could make Siri a more powerful personal assistant in the future?
With the rapid advancements in artificial intelligence, it’s no surprise that many users are looking forward to what the next generation of the Siri personal assistant will have to offer. From improved emotional recognition to autonomous management, the possibilities are endless. But what exactly are people looking for in the next-gen Siri?
One of the most requested features is improved contextual language capabilities. At the moment, a lack of such capabilities can make it difficult for some users to have a smooth conversation with their assistants. By incorporating more advanced voice recognition technologies, Siri could better understand contexts and intentions.
Another highly requested feature is the ability to multitask. Currently, Siri can only handle one task at a time, which can be frustrating for users who want to accomplish multiple things at once. The incorporation of multitasking could enable the assistant to handle complex requests simultaneously, thereby improving efficiency.
Contextual language
Many users are asking for Siri to have improved natural language processing capabilities. This would allow for more seamless conversations with the AI, as it would be able to understand more complex and nuanced requests. This would also make it easier for users to ask for specific information, as Siri would be able to understand more context.
Machine learning (ML)
Users have expressed a wish for Siri to become more proactive. With the increasing popularity of smart home devices, users also request that Siri integrates with daily habits and routines, improving awareness of the different spaces and rooms using machine learning (ML).
This could include autonomous actions like sending reminders, providing updates, asking for deliveries in perfect timing, optimizing e-vehicle charging, watering gardens only when needed, and even making suggestions based on the user’s behavior, or external data like weather forecasts and traffic conditions. This would make Siri a more helpful personal assistant that could anticipate needs, making the home itself more proactive.
Top breakthroughs in AI: what to expect
- Natural Language Processing (NLP): the ability to understand and interpret human language, allowing for more accurate and natural dialogue;
- Emotion detection: the ability to detect and respond to human emotions, allowing for more personalized and empathetic interactions;
- Machine learning (ML): a method of teaching AI through data and experience, allowing it to remember, adapt, and improve over time;
- Contextual understanding: the ability to understand and respond to the context of a conversation or request, providing more accurate and relevant results, answers, and actions;
- Explainable AI: the ability to analyze complex data and scenarios, providing clear explanations and the best options for decision-making processes, increasing transparency and trust;
- Autonomous awareness: the ability to connect and control multiple devices directly, creating a seamless awareness environment;
- Predictive analytics: in the future, Siri will be able to analyze data and predict future events, allowing for proactive problem-solving over the “Internet of Things” (IoT) without human interference;
- Computer vision: the ability to interpret and understand visual data, such as images or video, to improve image recognition and object detection, acting accordingly;
- Autonomous services: the integration with robotics, or automated systems (drone delivery, lawn mowing, vacuum cleaning, pool maintenance, etc) and third-party services to improve the home’s efficiency.
The next generation of Siri has the potential to revolutionize the way we interact with AI through advancements in integration capabilities. Siri could definitely become part of the family.
Stay tuned to AppleMagazine for more updates on the latest advancements in personal assistants and artificial intelligence.
Next-gen Siri: The Future Of Personal Assistant AI - AppleMagazine
With the rapid advancements in artificial intelligence, it's no surprise that many users are looking forward to what the next generation of Siri personal…
applemagazine.com
You would hope that BRN management have been banging down their door, showing them what Akida could do to improve Siri.
I'll switch from Google to Apple if they implement Akida. Unless Google decides they need us too.
Almost reads like NASA had trouble comprehending the autonomous nature of the Akida NN in performing inference and ML.
However, when Akida is used in a ground-up* redesign as a NN accelerator for a fully AI-enhanced CPU/GPU SoC, the ARM Cortex is superfluous and would take up precious silicon real estate, because the CPU/GPU could handle Akida's configuration.
They would also strip out some of the comms interfaces to save real estate.
In a newly designed system specifically adapted for SNNs, they could also drop the CNN2SNN circuitry.
* "ground-up" as in "starting from the bottom" - not "pulverised".
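The configure-once-then-run-standalone pattern described above can be sketched in a few lines. To be clear, this is a toy illustration of the concept only; the class and method names below are hypothetical and are not BrainChip's actual MetaTF API.

```python
# Toy sketch of the "host configures once, fabric runs standalone" flow.
# All names here are hypothetical illustrations, not BrainChip's MetaTF API.

class NeuralFabric:
    """Stands in for the on-chip neural fabric: once loaded with trained
    weights, it performs all subsequent inference locally."""

    def __init__(self):
        self.weights = None

    def load_configuration(self, weights):
        # One-time step performed by an external (pre-)processor:
        # push trained weights and configuration into the fabric.
        self.weights = weights

    def infer(self, spikes):
        # All later processing happens inside the fabric, with no host
        # involvement. Here: a trivial weighted sum stands in for the NN.
        if self.weights is None:
            raise RuntimeError("fabric not configured")
        return sum(w * s for w, s in zip(self.weights, spikes))


# The host touches the fabric exactly once, at configuration time...
fabric = NeuralFabric()
fabric.load_configuration([0.5, -1.0, 2.0])

# ...after which inference runs repeatedly without the host.
print(fabric.infer([1, 0, 1]))  # 2.5
```

The point of the sketch: the host (or pre-processor) appears only in `load_configuration`; every call to `infer` after that is self-contained, which is the distinction the NASA quote seems to have glossed over.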
Time to Market. Crap, an unpredictable element. That is one thing frustrating Neural Nuts like us.
I guess we will never know if management have been proactive in this respect, due to the NDAs.
Energy! 4-bits!!! (Darn, slow-ass Time to Market!!!)

Agreed AG, especially when you think about how today's Natural Language Processing models consume massive amounts of energy. Among the many benefits of neuromorphic computing is its ability to perform complex computations with less energy consumption, which is perfect for embedded systems where real-time processing is required for features like speech recognition. What's the point of having sophisticated NLP in a mobile phone if it drains your battery after an hour of usage?
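The 4-bit point is concrete: storing weights as 4-bit integers instead of 32-bit floats cuts weight memory by 8x, and fewer memory accesses is where much of the energy saving comes from. A minimal sketch of symmetric 4-bit quantization, purely illustrative and not Akida's actual quantization scheme:

```python
# Minimal symmetric 4-bit weight quantization sketch (illustrative only;
# not Akida's actual scheme). 4-bit signed integers span [-8, 7].

def quantize_4bit(weights):
    """Map float weights onto 4-bit signed integers plus a scale factor."""
    scale = max(abs(w) for w in weights) / 7.0  # largest weight maps to +/-7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float weights."""
    return [v * scale for v in q]

weights = [0.9, -0.35, 0.12, -0.7]
q, scale = quantize_4bit(weights)
print(q)                      # every value fits in 4 bits: range [-8, 7]
print(dequantize(q, scale))   # approximate reconstruction of the floats

# Storage: 4 bits/weight vs 32 bits/weight -> 8x smaller weight memory.
```

The reconstruction is lossy, which is exactly the trade-off: a small accuracy cost in exchange for a large cut in memory footprint and data movement.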
SoundHound video 2 days ago.
Responds faster, without awkward pauses, performing speech recognition on the device without any voice data being sent to the cloud to enhance privacy, multi-modal, learns from each interaction to make personalised suggestions, etc, etc...
Seriously, who else is doing this, if it isn't us?
View attachment 29399
Prophesee with Brainchip boosting 200 days ago? A history of boosting together is interesting.

Prophesee said in a LinkedIn exchange with @chapman89 that they are still in the early stages with Brainchip and do not yet have any sort of commercial agreement with us.
Snapdragon 8 is already available commercially, so it can't be us, right? Although it does sound like Akida is involved?? Some clarity around this would be great. If we were being used, there would surely have been some sort of announcement.