Common? Or a common pest committed to the cause?
I thought I was the only one!
Hi Bravo,

Hey Brain Fam,
Check out this SoundHound announcement from 15 Sep 2022 which describes their new EdgeLite option. It states "With this fully-embedded voice solution companies can process data locally without cloud-related data privacy concerns. Developers have access to natural language commands with less memory and CPU impact, a bundled wake word, and the ability to instantly update commands".
SoundHound has a partnership with Mercedes and they also have a "multi-year agreement with Qualcomm Technologies Inc. to enable SoundHound’s advanced voice AI technology, consisting of its automatic speech recognition, natural language understanding, and text-to-speech conversion software with select Qualcomm Technologies’ Snapdragon® platforms."
SoundHound Unveils Full Suite of Edge and Cloud Connectivity Solutions to Boost Accuracy and Privacy in Voice AI (businesswire.com)
SoundHound Inc. and Qualcomm Technologies Inc. Announce Strategic Agreement to Bring SoundHound’s Voice AI Technology to Snapdragon Platforms (businesswire.com)
Freaky shit going on. First my car radio actually shows Blu Wireless and I can't hit play as it's possessed, then I walk into the chemist aisle and see your C4 explosion.

If you reverse the 4C to C4:
C4 = Explosion!
I found this interview quite outstanding and could not stop watching it. How bloody informative was it! Being a shareholder, it has instilled a high level of confidence in me of what Brainchip has developed. Clearly they are the industry leaders and it is only a matter of time before the dam walls burst. As I reflect on all the dot-joining, how can I not be excited by the prospects going forward? So glad to be a part of this journey and appreciative of the sharing of knowledge and information so freely given on this forum. Beers and Cheers to all.

So glad Jerome is on our team. His enthusiasm is infectious.
"he oiled his way across the floor
oozing charm from every pore"
I'm pretty sure he went on to an illustrious career on "Thunderbirds".

A couple of comments on the video:
*impressive level of enthusiasm (Peter's first Akida AGM talk would give him a run for his money - though a great deal less awkward).
*students, just smile and don't look frightened
*I have traumatic memories of an unsupervised Van de Graaff generator (and a pencil-sharpener generator) - all-boys school
*is it just me, or is that a confusing accent?
*is that the guy in the chocolate ad (or am I thinking of toothpaste?) TV has cooked my brain
Hi Bravo,
Not sure that that hound dog has got with the programme yet:
https://www.bing.com/videos/search?q=life+gets+teejus+don't+it&view=detail&mid=A80E92E988B66B2CB6A1A80E92E988B66B2CB6A1&FORM=VIRE
"Hound dog howlin' all forlorn
Laziest dog that ever was born.
He's howlin' 'cause he's sittin' on a thorn -
just too tired to move over."
This is a patent from November 2020. It uses NN models to generate a voice response, but it uses a transformer (26) to do the speech recognition. Funnily enough, we were discussing transformers earlier - in a derogatory way.
But, if they wanted to improve it to
"Hey Mercedes!" standard ...
US2022165257A1 NEURAL SENTENCE GENERATOR FOR VIRTUAL ASSISTANTS
[0002] The present subject matter is in the field of artificial intelligence systems and Automatic Speech Recognition (ASR). More particularly, embodiments of the present subject matter relate to methods and systems for neural sentence generation models.
[0054] Instead of creating these unlimited ways of utterance sentences by experienced developers, the present subject matter can employ neural network models and machine learning to automate the generation of numerous, thorough, and effective sample utterance sentences to invoke one intent. Generated by finetuned natural language generators and trained classifiers, these sample utterance sentences can have a semantic meaning to invoke the specific intent they were created for.
[0080] According to some embodiments, the system can train the classifier model 26 to predict the probability of a generated utterance sentence being correct by finetuning from a pre-trained NLG model such as a transformer. Various transformer models such as XLNET, BART, BERT, or ROBERTA, and their distilled versions can provide sufficient accuracy and acceptable training and inference-time performance for different datasets and applications.
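For anyone curious what paragraph [0080] boils down to in practice, here's a minimal sketch (my own illustration, not code from the patent) of using a distilled transformer to score candidate utterance sentences against an intent. It assumes the Hugging Face transformers library; the checkpoint name and intent labels are placeholders, and I've swapped in an off-the-shelf zero-shot classifier rather than the patent's fine-tuned classifier model 26, just to keep it self-contained.

```python
# Illustrative sketch only - not from the patent. Scores generated
# utterance sentences against intents using a distilled transformer,
# in the spirit of paragraph [0080].
from transformers import pipeline

# A distilled model keeps memory and latency down, which is why the
# patent calls out distilled versions of BERT/ROBERTA etc.
classifier = pipeline(
    "zero-shot-classification",
    model="typeform/distilbert-base-uncased-mnli",  # placeholder checkpoint
)

candidate_utterances = [
    "turn the cabin temperature up two degrees",
    "I'm feeling a bit cold in here",
    "play my driving playlist",
]

# Each generated sentence gets a probability-like score per intent;
# low-scoring candidates could be filtered out of the training set.
for sentence in candidate_utterances:
    result = classifier(sentence, candidate_labels=["set temperature", "play music"])
    print(sentence, "->", result["labels"][0], round(result["scores"][0], 3))
```

In the patent's scheme the classifier would be fine-tuned on labelled utterances rather than used zero-shot, but the shape of the idea is the same: a transformer decides whether a machine-generated sentence really invokes the intent it was created for.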
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. It is used primarily in the fields of natural language processing (NLP)[1] and computer vision (CV).[2]
Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. For example, if the input data is a natural language sentence, the transformer does not have to process one word at a time. This allows for more parallelization than RNNs and therefore reduces training times.[1]
Transformers were introduced in 2017 by a team at Google Brain[1] and are increasingly the model of choice for NLP problems,[3] replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.[4][5]
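To make the "self-attention" idea above a bit more concrete, here's a tiny NumPy sketch of scaled dot-product self-attention (my own illustration, not from the Wikipedia article or the patent). The shapes and random values are just placeholders; the point is that every position computes weights over the whole input sequence at once, which is where the parallelism mentioned above comes from.

```python
# Minimal scaled dot-product self-attention sketch (illustrative only).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])           # every token scored against every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the whole sequence at once
    return weights @ v                                 # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                                # e.g. a 5-token sentence
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)          # (5, 8): each position sees the full context
```

Unlike an RNN, nothing here steps through the sentence one word at a time - the whole (seq_len x seq_len) attention matrix is computed in one go, which is the parallelisation advantage the excerpt describes.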
That's a trade mark.

Here's something pending, filed recently for SoundHound's EdgeLite.
Hi... A week or so ago I emailed both Peter and Anil together.
I said to Anil that I'd personally like to meet him face-to-face, so to save me having to fly to San Francisco, I politely planted a seed, that being,
it would be great for both Peter and Anil to attend next year's AGM together, assuring them that together they would be very well received by the loyal Australian shareholder base.
I don't know if any shareholders have had the opportunity to meet with Anil, but I'd consider it a real privilege, as with Peter.
Whatever our 4C does or doesn't deliver this week, there's more to come before this year is out... Listen very carefully to Nikunj during his presentation the other day with Edge Impulse; he seemed to give a hint about something coming at one point, in my own opinion of course.
Love Brainchip x
Do you have the link to the talk? I must have missed it - work has been crazy, the Christmas rush has started already.
Instead of talking in riddles, how about, like everyone else on TSEx, you simply share what you actually think or know? If Nikunj gave a hint, why not just say what you think.
Hey Mercedes: very powerful voice assistant

The "Hey Mercedes" voice assistant is highly capable of dialogue and learning by activating online services in the Mercedes me App. Moreover, certain actions can be performed even without the activation keyword "Hey Mercedes". These include taking a telephone call. "Hey Mercedes" also explains vehicle functions and, for example, can help when asked how to connect a smartphone via Bluetooth or where the first-aid kit can be found. If compatible home technology and household devices are present, they can also be networked with the vehicle thanks to the smart home function and controlled from the vehicle by voice.
Great to hear from you Motty... Yep, very exciting times ahead! And a very enthusiastic Jerome. I love his comments about there coming a point where the convenience of embedded devices will let you go through a turnstile or pay without having to tap, and fundamental identification and authentication will happen "Automagically" - love that description... a great interview.