Any AI gadget fans out there?

I have been watching the rabbit r1 AI device for several weeks. It is being marketed as an “artificial intelligence personal assistant device”. I guess we are all somewhat familiar with assistants like Siri and Alexa, and with large language models (LLMs), which let us interact with a computer using everyday “human” language. The rabbit r1 adds a Large Action Model (LAM) on top of that, so you can ask it (in regular speech, via the LLM) to perform actions or tasks on your behalf; for example, ordering something on DoorDash or playing a song on Spotify. It’s interesting: the interaction doesn’t seem to go through an API but appears to be more of an agent driving the apps. TBH I don’t completely understand how it works.
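Just to illustrate how I picture the LAM layer working (purely my guess, not rabbit’s actual design; the handlers below are made-up stand-ins): the LLM turns a spoken request into a structured intent, and some agent code then carries out the action for you.

```python
# Purely illustrative sketch of an intent -> action dispatcher.
# This is NOT rabbit's implementation; the handlers are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    action: str   # e.g. "play_song", "order_food"
    params: dict  # e.g. {"title": "...", "artist": "..."}

def play_song(params: dict) -> str:
    # A real agent would drive the Spotify app or web UI here.
    return f"Playing {params['title']} by {params['artist']}"

def order_food(params: dict) -> str:
    # A real agent would walk through the DoorDash ordering flow.
    return f"Ordering {params['item']} from {params['restaurant']}"

HANDLERS: Dict[str, Callable[[dict], str]] = {
    "play_song": play_song,
    "order_food": order_food,
}

def dispatch(intent: Intent) -> str:
    handler = HANDLERS.get(intent.action)
    if handler is None:
        return f"Sorry, I can't do '{intent.action}' yet."
    return handler(intent.params)

# The LLM side would produce something like this from
# "play Bohemian Rhapsody on Spotify":
print(dispatch(Intent("play_song", {"title": "Bohemian Rhapsody",
                                    "artist": "Queen"})))
```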
This little guy does (mostly) what your cellphone will do, only slower, but it appears to have been designed to detangle you from your phone, presumably shifting some of those functions to a different device. I’m not sure how carrying two devices will make my life any easier, though.

What I particularly liked was how you can use it to summarize documents and the like. I read of one use case where a guy uses it to record meetings and then has the AI summarize them for him. I saw another where someone pointed it at a lease agreement and asked whether the landlord had to notify them before entering the apartment. The r1 read the document and told them “no – he doesn’t”.
Rabbit OS runs on a low-end Android platform (which created a giant shit storm when that was discovered), has a data-only SIM slot, and uses Wi-Fi. It uses the MediaTek MT6765 processor. With that low-end processor, no AI is taking place on the device itself; it’s all cloud based.

Anyway, a majority of the recent reviews have been unfavorable, although most of the complaints seem easily addressed with software updates. The main complaint is that the device still appears to be in beta: not all of the advertised functionality works at this time. For example, they have four integrations (Uber, DoorDash, Spotify, and Midjourney), with another 80 said to be in the works. It sounds like even some of those four still have issues.

The cost is $200 with no subscription fees.


1 Like

I saw this advertised on Instagram. The comments ran about 95% negative to 5% positive. Most of the complaints were from people who had purchased the device and had yet to receive it, weeks after others who ordered theirs later (they sold them in block orders: the first block ships by one date, the second by a later one).

Other negatives were more on the uninformed side, along the lines of “what does this device DO for me, if/when it DOES arrive?” Though it’s being released (partially, and out of the promised order), its abilities are still only vaguely explained, with nothing beyond scattered examples akin to the ones you mentioned.

For $200, you would think more care and clearer explanations of functionality would be required before taking purchase orders. It may perform uniquely beneficial tasks, but without communication from the creators… The roll-out of the device seems premature on all fronts.

This may be great at some point, but I would wait for a better team to provide a product that delivers on its promises, while only accepting money for something guaranteed to be delivered by an agreed time. This company seems inadequate, inexperienced, and incompetent.

2 Likes

Anything based on LLMs (Large Language Models) is completely unreliable. They don’t understand anything about what they’re doing; the model just knows how to put words together in a coherent fashion based on what it’s been trained on, but it has no actual understanding.

Any time it produces correct answers, it’s an accident. Businesses that have tried to save money by replacing humans with this “AI” griftware have had to explain to customers that no, they cannot get free airline tickets, and no, taking certain household chemicals as a medical remedy will kill you. And so on.

Maybe someday we’ll have actual conscious AGI. But right now what we have are sophisticated stochastic filters that are way better than the Markov-chaining toys we used to play with in the ’90s, the ones that generated hilarious sentences. The results, however, are no more meaningful.
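For anyone who never played with those toys, this is roughly all they were (a minimal word-level sketch; the corpus is just a throwaway placeholder):

```python
# Toy word-level Markov chain text generator, the kind of thing
# referred to above. The corpus is a throwaway placeholder.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the cat "
          "and the mat sat on nothing at all").split()

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    word = start
    words = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        words.append(word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat sat on nothing at all"
```

LLMs do something vastly more sophisticated, but the output is still driven by learned statistics rather than understanding.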

I’m not just not a fan, I am actively opposed to the real harm imposed by using these in places they should not be.

Edited to add: wait a couple of years, and possibly this could all change. There’s been some rapid progress, especially in techniques that have nothing to do with LLMs.

4 Likes

I think I agree with you 100% on this.
I have seen several examples where the AI response was completely incorrect.

I’m fearful that people could rely on this for medical advice, etc.

I’m confused… Let me know if I misread, but I thought your post seemed optimistic about this device?

I am optimistic about it but also a bit guarded.
I think it can certainly have some usefulness, but there are a lot of videos out there showing it misidentifying objects, etc. I guess as a gadget it interests me, but I see it more as a novelty or for entertainment.

I would like to get one just to check it out, and I probably will get one eventually but I think I may wait a generation or two and see what happens.

I think the other device, the Humane AI Pin, got ‘MKBHD’d’ … roasted on YT for being unready and overly slow. Then the Rabbit landed, and someone found out you could install the dev kit as an app and run it much faster on a capable Android phone.

2 Likes

lol - Marques Brownlee is one of the few tech reviewers I follow on YT. Love his work!

1 Like

The other channel I watch regularly is Cold Fusion TV. Their recent episode on AI entering the world of music was fascinating. Pattern recognition and iterative algorithms have gone much further than I could have imagined. Also insightful: many AI companies overhype their capabilities, leading to disappointment, while others are UNDER-promising (“it’s just a smart assistant! a new tool!”) while actually jeopardising jobs.

2 Likes

One of the things on my list to check out is Meta’s Llama 3 (or some future generation). It’s open source, and there are already a couple of Android projects that let you run it locally on your phone. Building a phone app around that would obviate the need for a separate standalone gadget.
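If anyone wants to poke at it on a laptop before worrying about phones, llama-cpp-python with a quantized GGUF build is probably the easiest route. A rough sketch (the model filename is a placeholder for whichever quantized build you download):

```python
# Rough sketch: run a quantized Llama 3 build locally with
# llama-cpp-python. The GGUF filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder
    n_ctx=4096,  # context window size
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Summarize the key points of this document: ..."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```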

2 Likes

Rabbit R1: The hardware itself is pretty disappointing. There are significant usability issues there that can’t be fixed by just a software update. And even if it did everything they promise (which it currently cannot, and is pretty far away from doing so IMO), I still don’t see why it needs to be a separate device and not just an app or something tightly integrated into the phone’s OS.

LLMs: In general, there is a split consensus right now. One camp believes that understanding is an emergent property, and that by making the model larger and training it on more data, it will get better at actually understanding things. The other side believes we need a completely different model architecture to achieve understanding, one that is yet to be discovered. To me, the latter is the most likely case. Larger models + more training data + more compute will keep making them better, but we will probably hit the limit pretty soon (less than 5 years, I think).

That said, it’s pretty exciting how capable current LLMs already are and with the right guardrails, they can do some tasks amazingly well.

3 Likes

Yikes! Glad I didn’t pull the trigger

1 Like

We have a few Grace Hopper GH200s that we’ve been working with. Llama 3 screams on those machines.

1 Like

I was just about to post this video; I’m glad that Patrick didn’t get it! That $200 could be better spent elsewhere, for example on half of a mechanical pencil :laughing:

AI is a massive field that has been revolutionizing every industry for years, but AI and LLMs are not the same thing. LLMs are the most impressive but least useful AIs out there, in my opinion. I would be very wary of anyone trying to sell us something innovative built on LLMs right now. The truly useful AIs run backstage, making things run faster, better, and cheaper (see: computer vision, image classification, assisted design tools, etc.).

Oh, on this note: if anyone’s trying to learn a language, LLMs are splendid for that. The whole objective of an LLM is to produce human-like natural text, so if you paste text you wrote into ChatGPT (for example) and ask it to make it sound more natural, the results are great, and you can learn a ton that way. It’s like having a personal native-speaker teacher available 24/7.
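If you’d rather do that programmatically than through the ChatGPT UI, it’s basically one API call. A sketch with the OpenAI Python client (the model name and prompt wording are just examples):

```python
# Sketch: ask an LLM to polish non-native text via the OpenAI
# Python client. Model name and prompt are just examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Yesterday I have went to the store for buy some milks."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's text so it sounds like a native "
                    "speaker wrote it, then briefly explain each change."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```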

3 Likes

About half my workday involves attending meetings, and about half of those don’t pertain directly to me or my department and rarely require my input. I read one use case where a guy had the r1 listen in on his meetings and then summarize the call. I NEED this in my life lol.
This in particular is what attracted me to it, but it sounds like I’d be better off waiting a bit.
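In the meantime, you could cobble most of that workflow together from off-the-shelf pieces, no r1 required. A rough sketch using openai-whisper for transcription and an LLM for the summary (filenames and model names are placeholders):

```python
# Sketch: transcribe a recorded meeting with openai-whisper, then ask
# an LLM for a summary. Filenames and model names are placeholders.
import whisper
from openai import OpenAI

# 1. Transcribe the recording.
stt = whisper.load_model("base")
transcript = stt.transcribe("meeting.wav")["text"]

# 2. Summarize the transcript.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarize this meeting transcript as bullet points, "
                    "listing decisions and action items."},
        {"role": "user", "content": transcript},
    ],
)

print(summary.choices[0].message.content)
```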

1 Like

At this point in my career I refuse to attend a meeting that I’m not needed for, or one that doesn’t have an agenda and expected outputs. I’ve had several roles where it’s constant meetings with no real agenda, no one taking notes, no action items, no follow-up, wasted resources, etc. It’s become a pretty big pet peeve of mine. If I need a special device to summarize a meeting I’m not fully engaged in, then that’s just sad.

Sorry, rant over.

3 Likes

Oh, I completely get it. The issue here is that I am attending these so that my sr. director and VP won’t have to.

I was voluntold to do it :grin:

2 Likes

Oh, you can do that with your phone, I believe. I’m not sure which LLMs have that functionality, but I’m pretty sure either GPT-4 or the new GPT-4o (the one with the Scarlett Johansson-style voice, lol) can do it. It’s a pretty basic LLM application at this point. Zoom’s assistant also does it automatically (so that they can put your conversations in their private records more easily lol. PSA: don’t discuss confidential things over Zoom).

Coffeezilla released yet another video about the Rabbit scam.

2 Likes

Thanks! I’ll look into that. That explains why we quit using Zoom as an enterprise solution (potential HIPAA violations, I guess).