BRN Discussion Ongoing

Evening DingoBorat,

Yep, one would think that... our directors would be utilising our latest tech... it would be very strange if the directors thought fit to incorporate AKIDA 1000 when shareholders have just paid several MILLION dollars for AKIDA 1500 to be fabricated and produced (Global Foundries).
Never mind AKIDA 2, in three flavours.

Think the expression... time will tell... has well and truly passed.


TIME TO DELIVER.

Regards,
Esq.
Some might be drinking... sorry 😔... I mean thinking: why is the Company using these "old" AKD1000 chips in the new VVDN Edge Box?

The fact is, AKIDA 1.0 IP and AKD1000 chips have not been made obsolete, redundant, or outdated compared to AKIDA 2.0 IP.

They are still very much current technology and still way in front of the competition in neuromorphic hardware.

There is also the fact that at this stage of the Company's journey, we have to be prudent and make the most of the Company's resources in all areas.

As Sean has stated, this year is crucial for BrainChip, and while it would be nice to pump out some AKD2000 chips and put the latest development of the technology in the Edge Boxes, the Company simply does not have the funds to throw at that.


I feel strongly that, with our new high-calibre hires on board (especially in sales, as that's what we really need), this will be a very exciting year to come.

Good Luck to the Company and All Holders.

You need much more than luck in life to succeed, but a little luck goes a long way 😉
 
Reactions: 41 users
Looks like Uni Côte d'Azur is exploring neuromorphic vision now.

This new role mentions basing some of the project on possible previous work with SNNs, and I wonder if that relates to the other project I posted (link below), which specifically mentioned utilising Akida.

It will also be based at the same location as the previous project.

:unsure:

Université Côte d'Azur

Doctoral Student

Université Côte d'Azur, Greater Nice Metropolitan Area
Job poster: Jean Martinet

Context
These multiple proposals take place in the context of an international collaborative project co-funded by the French ANR and the Swiss NSF: the project NAMED (Neuromorphic Attention Models for Event Data), which will start on February 1st, 2024.

The field of embedded computer vision has become increasingly important in recent years as the demand for low-latency and energy-efficient vision systems has grown [Thiruvathukal et al. 2022]. One of the key challenges in this area is developing intelligent vision systems that can efficiently process large amounts of visual data while still maintaining high accuracy and reliability.

The biological retina has inspired the development of a new kind of camera [Lichtsteiner et al. 2008]: event-based sensors asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign (positive or negative) of the brightness changes. In addition to eliminating redundancy, they benefit from several advantages over conventional frame cameras, from which they fundamentally differ. Event sensors are inspired by the human eye, which is primarily sensitive to changes in the luminance falling on its individual sensors. These changes are processed by layers of neurons in the retina through to the retinal ganglion cells, which generate action potentials, or spikes, whenever a significant change is detected. These spikes then propagate through the optic nerve to the brain.
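For anyone who has not worked with event cameras: each event is just a (time, x, y, polarity) record. Here is a minimal sketch of that representation (my own illustration; the tuple layout and the function are assumptions, not something from the project):

```python
def count_events_per_pixel(events, width, height):
    """events: iterable of (t, x, y, polarity) tuples -- timestamp, pixel
    location, and +1/-1 for a brightness increase/decrease. Accumulates a
    per-pixel event-count map, a common first step before further processing."""
    counts = [[0] * width for _ in range(height)]
    for (t, x, y, polarity) in events:
        counts[y][x] += 1
    return counts

# Two brightness changes (one up, one down) at pixel (x=3, y=2)
stream = [(10.0, 3, 2, +1), (42.5, 3, 2, -1)]
print(count_events_per_pixel(stream, width=8, height=4)[2][3])  # -> 2
```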

Cognitive attention mechanisms, inspired by the human brain's ability to selectively focus on relevant information, can offer significant benefits in embedded computer vision systems. The human eye has a small high-resolution region (the fovea) in the center of the field of vision, and a much larger peripheral region, which has much lower resolution combined with an increased sensitivity to movement. Limited resources are therefore deployed to extract the most salient information from the scene without wasting energy capturing the entire scene at the highest resolution. This foveation mechanism has inspired the recent development of a variable-resolution event sensor [Serrano-Gotarredona et al., 2022]. This sensor has electronic control of the resolution in selected regions of interest, allowing downstream computational resources to be focused on the specific areas of the image that convey the most useful information. This sensor even goes beyond biology by allowing multiple regions of interest.
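To make the foveation idea concrete, a toy software model (my own sketch; the actual sensor does this in the pixel-array electronics, not in software): keep full resolution inside electronically selected ROIs and collapse everything else into macro-pixels, with multiple ROIs allowed.

```python
def foveate(events, rois, block=4):
    """events: iterable of (t, x, y, polarity) tuples. Events inside any
    ROI keep full resolution; elsewhere their coordinates are snapped to
    block-sized macro-pixels. rois: list of (x0, y0, x1, y1) rectangles."""
    out = []
    for (t, x, y, polarity) in events:
        if any(x0 <= x < x1 and y0 <= y < y1 for (x0, y0, x1, y1) in rois):
            out.append((t, x, y, polarity))  # high-resolution region
        else:                                # macro-pixel: coarse location only
            out.append((t, x // block * block, y // block * block, polarity))
    return out

# One ROI over the top-left corner; the second event gets coarsened
print(foveate([(0.0, 5, 5, 1), (1.0, 50, 50, 1)], rois=[(0, 0, 16, 16)]))
# -> [(0.0, 5, 5, 1), (1.0, 48, 48, 1)]
```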


Scientific objectives
The general objective of this research project (with specific tasks for the internship, PhD, and postdoc) is to design and implement computer vision attention models adapted to event data. A first step will consist of studying state-of-the-art attention mechanisms in deep networks and their link with cognitive attention as implemented in the brain. Cognitive attention refers to the selective processing of sensory information by the brain based on its relevance and importance to the current task or goal. It involves the ability to focus one's attention on specific aspects of the environment while filtering out irrelevant or distracting information. In particular, the study will distinguish between top-down and bottom-up attention. The second step will be the design of an attention architecture that allows selectively focusing on relevant regions while ignoring irrelevant parts, which will depend on the target task (e.g., segmentation, object tracking, obstacle avoidance, etc.). The model will be based either on standard deep networks or on spiking neural networks, based on previous work [GIT]. Spiking Neural Networks are a special class of artificial neural networks in which neurons communicate by sequences of asynchronous spikes. They are therefore a natural match for event-based cameras due to their asynchronous operation principle. This selection of regions will result in less data usage and smaller models (a frugal system). In the third step, we will evaluate the impact of the attention mechanism on the general performance of the computer vision system. The target metrics will obviously depend on the selected task and will include accuracy, mIoU, complexity, training time, inference time, etc.
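A crude illustration of the bottom-up flavour described above (my own sketch; the project itself proposes deep or spiking networks, not this heuristic): rank image tiles by recent event activity and treat the busiest tiles as regions of interest.

```python
def bottom_up_rois(events, tile=16, k=2):
    """Bottom-up (stimulus-driven) attention heuristic: the tiles that
    generated the most recent events are taken to be the most salient.
    events: iterable of (t, x, y, polarity); returns k ROI rectangles
    as (x0, y0, x1, y1)."""
    density = {}
    for (t, x, y, polarity) in events:
        key = (x // tile, y // tile)
        density[key] = density.get(key, 0) + 1
    busiest = sorted(density, key=density.get, reverse=True)[:k]
    return [(tx * tile, ty * tile, (tx + 1) * tile, (ty + 1) * tile)
            for (tx, ty) in busiest]
```

The returned rectangles could then drive the sensor's multi-ROI control (or the `foveate` sketch above), closing the attention loop.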


Job information
Location

Université Côte d’Azur, Sophia Antipolis (Nice area), France

Types of contracts
Internship: duration 4-6 months / PhD: duration 36 months / Postdoc: duration 18 months

Job status
Full-time for all

Candidates’ profiles
Master 2 / PhD in Computer Science (Machine Learning, Computer Vision, AI) or Mathematics or Computational Neuroscience. Programming skills in Python/C++, interest in research, machine learning, bio-inspiration and neurosciences are required.

Salary
Standard French internship allowance / PhD salary / Postdoc (research engineer) salary by CNRS

Offer starting date
Internship: Around March 2024 / PhD: Flexible around October 2024 / Postdoc: Flexible from October 2024
(PhD opportunity after the internship)

Application period
From December 2023 to the offer starting date
 
Reactions: 18 users

Perhaps

Regular
AKIDA 2.0 is available, but there hasn't been an announcement of a physical AKD2000 reference chip being produced..

Not sure exactly when the AKD1000 chips were originally made, but there were the engineering samples and then the reference chips, which were greatly improved.

It must be getting close to two years now, though, and they are going into Edge Boxes and will still be state of the art!

To do that today, in these times of rapid technological development, just goes to show how far AKIDA is ahead of the curve.

They could also use AKD1500 chips if they needed to...

We really have no idea how many AKD1000 chips were produced, or what percentage of the run was "good".

There's always a non-viable percentage, and we don't know what the device yield is when producing AKIDA chips. But I'm guessing, with the engineering samples coming out practically perfect the first time (a huge credit to Anil Mankar, as was stated by Louis DiNardo, who had rarely seen that), that it is pretty high by industry standards.
Commercial production requires a tape-out of the chip first. As of today, no silicon wafer exists for production of AKD 2.0. A tape-out at a foundry is a long process, lasting several months, and the costs fall on BrainChip; the usual price for a tape-out is in the range of US$4-5 million. AKD1000 and AKD1500 are production-ready; AKD 2.0 is not. Hopefully the announcement of a tape-out will come soon, but maybe the cash reserve isn't good enough to do it now.
The quarters of runway covered by the cash balance in the financial reports are based on fixed costs for employees, buildings and so on. Extra costs like a tape-out are not part of that calculation, so it looks like additional funding would be needed first.
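To make the runway arithmetic concrete (a toy sketch with made-up figures, not BrainChip's actual numbers):

```python
def runway_quarters(cash, fixed_costs_per_quarter, one_off_costs=0.0):
    """Quarters of runway = remaining cash / fixed quarterly burn.
    A one-off item such as a tape-out comes straight off the cash pile
    before the division, which is why it is invisible in a runway
    figure computed from fixed costs alone."""
    return (cash - one_off_costs) / fixed_costs_per_quarter

# Hypothetical numbers, in US$ millions
print(runway_quarters(20.0, 5.0))                     # 4.0 quarters on fixed costs
print(runway_quarters(20.0, 5.0, one_off_costs=4.5))  # 3.1 quarters after a tape-out
```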
 
Reactions: 6 users

Diogenese

Top 20


Interestingly, Luminar, who have displaced Valeo on Mercedes' Christmas card list, and a couple of other lidar makers have foveated lidar systems, which increase the laser point density in areas of interest. Apparently this gives Luminar a greater range than Valeo.


WO2023057666A1 ELECTRONICALLY FOVEATED DYNAMIC VISION SENSOR 20211005

Applicants CONSEJO SUPERIOR INVESTIGACION [ES]

Inventors LINARES BARRANCO BERNABÉ [ES]; SERRANO GOTARREDONA MARÍA TERESA [ES]






The present invention relates to a vision sensor comprising a matrix (1) of pixels (5) on which a foveation mechanism is used, defining a series of low resolution regions of grouped pixels (macro-pixels) such that they operate as a single isolated pixel (5), information being obtained from the groups of pixels (5) and not from each pixel (5) individually. Due to the low resolution regions of macro-pixels, energy and data bandwidth savings are achieved in favour of the high resolution regions that are not grouped or foveated. The regions of grouped pixels can be configured with external electronic signals. In addition, multiple high resolution or foveation regions as well as region sizes can be electronically activated.

As is known, vphi, voi and vob are proportional to log(Iphoto), so any of these voltages is proportional to the logarithm of the photocurrent. Defining a generic voltage Vlog such that Vlog ∝ log(Iphoto), and considering the added voltage Vsum = Σi Vlog,i, then:

The effect of the sum of the voltages of the individual pixels (5) is equivalent to the multiplication of their photocurrents. The pixel group (5) will be sensitive to the relative variation of the product of all the photocurrents in the pixel group (5). The sensitivity of the group is proportional to the total number of pixels (5) in the group, so the event generation frequency would be multiplied by the number of pixels (5) for the same relative variation of the photocurrent across the group. The output bandwidth consumed by the group in low-resolution mode will be the same as that of all the individual pixels (5) in high-resolution mode. A simple and practical circuit to add the voltages of the pixels (5) would be to interconnect the floating nodes of the capacitors C1/C2 of the pixels (5) of the group.
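In other words, the circuit exploits the log-sum identity (my restatement of the patent's reasoning, not text from the patent):

```latex
% Each pixel voltage is logarithmic in its photocurrent, V_i \propto \log I_i,
% so summing the voltages of the N pixels in a macro-pixel group gives
\[
V_{\mathrm{group}} = \sum_{i=1}^{N} V_i
\;\propto\; \sum_{i=1}^{N} \log I_i
\;=\; \log \prod_{i=1}^{N} I_i .
\]
% The group therefore responds to the product of the photocurrents: if all
% N photocurrents change by the same relative factor (1 + \epsilon), the
% group output shifts by N \log(1 + \epsilon), i.e. the sensitivity scales
% with the number of pixels in the group, as the patent states.
```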

True to their European roots, they use analog.
 
Reactions: 13 users
Some other info on the originally spotted project using Akida.

The project is backed by Renault, so they would be aware of its progress and, by extension, of Akida.


Also from a Renault 2020 report.




 
Reactions: 9 users
It would appear Prophesee are also involved, and the project runs till 2024.



 
Reactions: 17 users

Diogenese

Top 20
So the patent for the foveated DVS belongs to the Spanish research organization

CONSEJO SUPERIOR INVESTIGACION (CSIC).

This is the sensor: while the foveation (pixel density) is digitally controlled, the pixel output is analog.

It is interesting that Prophesee is involved. I wonder if they have rights to use the foveated DVS?

CSIC seems to be the Spanish equivalent of CSIRO, so it is probable that they will license the technology.

As this is a DVS sensor whose output is analog, it would be a natural fit for Akida with an appropriate ADC interface.


 
Reactions: 13 users

TheFunkMachine

seeds have the potential to become trees.
Reactions: 24 users

Damo4

Regular
 
Reactions: 75 users

Damo4

Regular


Thought I read Uniden at first and nearly fainted.
Unigen will do for now.
 
Reactions: 20 users

Gemmax

Regular
Reactions: 33 users

Damo4

Regular
Reactions: 11 users

Sirod69

bavarian girl ;-)
To be honest, I don't know anything about Unigen; I've only just read about it too. What exactly do they do? I can't find it on the stock market, but I think you'll know more about it by the time I wake up tomorrow.
I don't feel like digging any more today; it's midnight for me.
 
Reactions: 8 users

Sirod69

bavarian girl ;-)
They last posted on Twitter before 2016. Please tell me they've simply given up on Twitter, or please tell me that this is still really great. I need this before Christmas, man.
 
Reactions: 10 users

hotty4040

Regular
Hi Hotty,

Akida has 2 basic functions:
A. Classification/Inference;
B. Machine learning.

Using images/video as an example, classification is the identification of an object by comparison of similarity with classes of images in a model library. This is basically a guesstimate (probability).

Machine learning is the addition of new objects to the model library for future comparison.
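As a toy illustration of those two functions (my own sketch in plain Python; this is not the Akida API and not its spike-based on-chip implementation): classification as a similarity comparison against a model library, and learning as simply adding a new entry to that library.

```python
import math

def classify(feature, library):
    """Inference, in spirit: compare an input feature vector against the
    stored class prototypes and return the best match with a similarity
    score -- the 'guesstimate (probability)' mentioned above."""
    def similarity(a, b):  # cosine similarity
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
    return max(((label, similarity(feature, proto)) for label, proto in library.items()),
               key=lambda pair: pair[1])

def learn(label, feature, library):
    """On-device learning, in spirit: add a new class prototype to the
    model library so that future inputs can match it."""
    library[label] = feature

library = {"cat": [0.9, 0.1, 0.2]}
learn("dog", [0.1, 0.8, 0.3], library)        # one-shot addition of a new class
print(classify([0.15, 0.75, 0.35], library))  # -> ('dog', 0.99...)
```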

The old CNN software running on a CPU/GPU uses lots of watts performing MACs (multiply-accumulate operations), which are maths-heavy calculations. Akida operates on spikes indicating events (changes), which were initially represented by a single digital bit (now up to 4 bits in Akida 1 for increased accuracy). In a digital image, if adjacent pixels have the same illumination, then no event is registered, so no current is drawn by the associated transistors, which is why Prophesee is a natural fit for Akida. In old-style CNN, on the other hand, each pixel, which may be 16 bits, is processed in a MAC processing matrix, involving 16*16 mathematical operations, each switching one or more transistors.
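A back-of-the-envelope comparison of the two regimes (my own illustrative arithmetic, not BrainChip's published figures):

```python
def dense_mac_ops(width, height, bits=16):
    """Conventional CNN front end: every pixel goes through the multiply-
    accumulate array regardless of content -- the '16*16 operations per
    pixel' picture above."""
    return width * height * bits * bits

def event_ops(frame_prev, frame_curr):
    """Event-based processing: work scales with the number of pixels whose
    values actually changed, not with the frame size."""
    return sum(1 for a, b in zip(frame_prev, frame_curr) if a != b)

prev = [0] * 10000        # a 100x100 frame, flattened
curr = list(prev)
curr[123] = 5             # only one pixel changed between frames
print(dense_mac_ops(100, 100))  # 2,560,000 operations, changed or not
print(event_ops(prev, curr))    # 1 event to process
```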

The new features of TeNNs and ViT are used for classification comparisons.

As we know Akida can be used with any sensor - camera, microphone, chemical sensor, vibration detector, ...

The EB can be used where signals from a number of sensors need to be classified.

This could be in a supermarket processing video images from all the checkouts to determine the price from a model and for stocktake purposes.

It can be used in a factory with many machines to determine if vibration indicates a need for maintenance.




https://brainchip.com/brainchip-previews-industrys-first-edge-box-powered-by-neuromorphic-ai-ip/

Designed for vision-based AI workloads, the compact Akida Edge box is intended for video analytics, facial recognition, and object detection, and can extend intelligent processing capabilities that integrate inputs from various other sensors. This device is compact, powerful, and enables cost-effective, scalable AI solutions at the Edge.

BrainChip’s event-based neural processing, which closely mimics the learning ability of the human brain, delivers essential performance within an energy-efficient, portable form factor, while offering cost-effectiveness surpassing market standards for edge AI computing appliances. BrainChip’s Akida neuromorphic processors are capable of on-chip learning that enables customization and personalization on device without support from the cloud, enhancing privacy and security while also reducing training overhead, which is a growing cost for AI services.

“BrainChip’s neuromorphic technology gives the Akida Edge box the ‘edge’ in demanding markets such as industrial, manufacturing, warehouse, high-volume retail, and medical care,” said Sean Hehir, CEO of BrainChip. “We are excited to partner with an industry leader like VVDN technologies to bring groundbreaking technology to the market.”

Thanks Diogenese, for this enlightening answer to my questions. This info helps enormously in understanding the EB's practical uses in the different markets it could/will be introduced and demonstrated in. The first neuromorphic edge box on the market should have some clout, I would imagine, in future developments across many different areas of sensor technology. Let's hope so anyway.

It's Monday, so as usual I'm finding it difficult to get started for the week ahead; more coffee will hopefully help.

Thanks again, and let's hope we can start to see some upward movement in the s/p into the new year. Your explanation has improved my understanding of the EB and the neuromorphic elements within it. Much appreciated. Onward and upward from here I'm expecting.

Love your (many) contributions on this forum; your knowledge is outstanding and, I'm sure, welcomed by many investors on the Akida (BrainChip) journey, which continues to progress intriguingly. May it continue.

Akida Ballista >>>>> Onward and Upward <<<<<

hotty...
 
Reactions: 21 users

davidfitz

Regular
 
Reactions: 13 users

Diogenese

Top 20

Hi Damo,

This is great news ... and this seems to be a longer-term project, which means that it may incorporate Akida 2:

BrainChip and Unigen Partner to Deliver Powerful, Energy-Efficient Edge AI Server - BrainChip


Laguna Hills, Calif. – DATE, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, and Unigen Corporation, a global leader in the design and manufacturing of enterprise and industrial electronics, today announced a strategic partnership to deliver a new configuration of the recently released Unigen Cupcake Edge AI Server, a compact and powerful solution based on BrainChip’s Akida™ neuromorphic processor.

However, as with VVDN, there are competitor products in the Unigen Edge AI server stable:

Edge AI Server Market Grows | Unigen Cupcake - Unigen


Rapid data growth and the need to process it at the edge is expanding opportunities in the server market, with new entries from Unigen and Lenovo.

OCTOBER 2, 2023

By Adam Armstrong | TechTarget

Unigen’s Cupcake

Unigen’s newly released turnkey server is also a contender for computer vision use cases, according to the vendor. Cupcake is designed to be low power, low latency and high performance, using an Intel Elkhart Lake 4-core Atom processor that is matched to AI modules from partners including Blaize, Hailo, Degirum and MemryX. The modules connect to the server through an E1.S interface.
Cupcake can help conduct near real-time inferencing that can be used in security and safety use cases, such as shutting down manufacturing equipment that malfunctions, according to the vendor. While improving safety is important, specific uses have a limited return, according to Baron Fung, an analyst at Dell’Oro Group, an IT market research firm headquartered in Silicon Valley. A limited return wouldn’t prompt investment as much as broader use cases.
“Users will need the same platform that can be utilized across different applications [of the technology] that are similar,” Fung said. Companies are looking for application options from vendors rather than developing their own, he said. But as the market matures, there will be more demand for edge AI devices such as Unigen’s.

I'm only familiar with Hailo. They do not use SNNs; instead they use MACs with some sparsity. However, this will be more power-hungry and slower than Akida.
 
Reactions: 38 users

Sirod69

bavarian girl ;-)
OK, this is Unigen on LinkedIn, but sorry, I'm tired now.
 
Reactions: 18 users

buena suerte :-)

BOB Bank of Brainchip
Good night :) Zzzzzzzzzz!
 
Reactions: 5 users

Sirod69

bavarian girl ;-)
But I still have to say one thing: I love you all very much... and why do I love you? ... because I don't even know you all ... but still ... I'm so in the red with BrainChip ... and that's why I love you ... because you've been with me for a long time through this investment. Greetings from Bavaria, Germany, and now I'm heading out.
 
Reactions: 43 users