2nd round in "AI in a box" or a review worth a watch. It's not as catastrophic as the Humane Pin, but that doesn't mean it's good. Looks like good ideas and great products won't come from wrapping your "AI assistant" on real innovation - LLMs - and put it in a box. Sure, LAMs can be interesting, but it's not there as a feature on launch day. Not to talk about other limitations from battery life to the lack of touch control etc. Curious to see what Apple / Google will come up with in the segment.
Axel Halász’s Post
More Relevant Posts
-
As Marques Brownlee points out, today's AI products aren't really products at all — companies are asking people to pay large amounts of money for something that does not work as advertised, and which might possibly work at some unknown point in the future, if we're lucky. What a waste. It's so weird to work on the science of tech safety during such a brazen and dramatic race to the bottom in quality. If AI were cars, it would be the equivalent of working on seatbelts while manufacturers were forgetting to add wheels. The sad thing is that AI doesn't have to be this way. When I was at SwiftKey, we shipped working software to hundreds of millions of people in ways that made reliable promises, met practical needs and also improved our machine learning models from the feedback we received. And we were far from unusual. The truth is that today's AI companies and their leaders are *choosing* to ship broken systems that fail to work as advertised, driven less by engineering than by false hopes in the power of hype, market share, and extractive data practices.
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
Will standalone AI devices actually take off as a consumer product?
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
I saw this on YouTube this morning and watched Matthew Berman's review yesterday. Matthew, overall, really likes it, though he notes it is not performing all four of the tasks as advertised. My takeaways from both reviews (and I'm still looking forward to receiving mine in June, here in the UK) are: (Pros) it's smaller than expected, very well made, and answers research questions very quickly (though it is not immune to hallucinations, it seems). (Cons) it cannot perform most of the functions advertised in January by their founder Jesse, the battery life is ridiculously short, there is no back button, and the (TFT) touchscreen can (at this moment in time) only be used for the onscreen keyboard, which is utterly crazy, in my view. What really irks me is that it cannot (yet) interface with your calendar, create tasks/to-dos or send emails. That's very strange for an AI personal assistant. But weigh this against the $200 price tag and the fact that there are no service charges or subscription fees. This is a very affordable gadget/assistant and, as was mentioned in the video cited above, this is not unlike the method Tesla used to get huge amounts of real-world training data from early adopters.
Founder and Chief Learning Architect @ Linda B. Learning | Impactful, Innovative Learning Technology Solution Expert
In another case of "this is what I asked for, this is what I got," Marques Brownlee reviews the Rabbit R1 AI in a box. Is it just me or is anyone else over tech companies crowdsourcing their testing on paying customers? #ai #virtualassistant #aiinabox https://lnkd.in/gEQD_Zxp
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
These AI device companies have exciting concepts, but of the main two, the Humane Pin and the Rabbit R1, neither is ready for the market, as this review shows. Do people think they should release before the product is ready, or wait? #ai #devices https://lnkd.in/e-TJYvDX
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
Here is another AI product getting mixed/bad reviews. Certainly not as bad as the AI Pin, but the Rabbit R1 is still too early: not yet an MUP (Minimum Usable Product). I'm excited about these kinds of products innovating an oversaturated app-based market, but Marques Brownlee gives good advice: "Buy a product for what it is now, not for what it promises it will be in the future". That was the whole point of minimum viable (or usable) products: make a product that solves a well-defined, huge problem for a specific niche of people, and build on top of that. I think it's OK to sell non-refined products, as long as they still solve that well-defined, huge problem. Understanding what the MVP is is one of the first big obstacles for every startup. That said, I would still like to see more companies working on AI products. What about you? #ai #rabbitr1 #aipin
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
Is AI lowering the bar for product quality? How many times over the last year or two have you encountered this situation: on a product's landing page, spaceships roam the expanses of the universe with AI under the hood, but when you try the product itself, it turns out to be a foot-powered scooter with the simplest use cases and promises of improvements in the future? I realized that I had been in this situation many times. The idea was prompted by a review of the Rabbit R1 gadget. This is a box with AI inside whose makers promised a bunch of features. In fact, it can answer simple questions and works clumsily with several applications. At the same time, the developers are asking for the full price, promising advanced features such as voice instructions for working with any application someday later. Well, at least the price is “only” $200, not $700 like the analogue from Humane AI. By the way, Marques Brownlee also has a review of the Humane AI Pin, titled "The worst product I've reviewed... so far" 😄 Apparently, the point is the early stage of development of AI technologies and their specifics. To make AI work well, you need a lot of data. To get a lot of data, you need many users with their unique behavior scenarios. To get many users, you need a quality product... Another chicken-and-egg problem, one that many models have. The advice Marques gives at the end of the video, and I second it, is to choose products based on what they already do, not what their creators promise for the future. An old tip that becomes even more relevant with the widespread use of AI technologies! ▶️ https://lnkd.in/ecvWFtxh #aifuture #gpt #ai #chatgpt #chatgpt4 #aiadoption #artificialintelligence #artificialgeneralintelligence #artificialintelligenceforbusiness #artificialintelligenceai #artificialintelligencefordesign #tech #it #techtips #technologies #skills #techcommunity #tools #freetools
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
■ Our #Data Is What They're Looking For - Reflecting on #RabbitInc's Strategy and the Implications of Early #AI Releases ■ After watching Marques Brownlee's insightful review of the #RabbitR1, it's clear that RabbitInc's launch strategy is about more than just marketing. By entering the market before slower, larger tech giants, they aim to embed their AI into users' daily lives, collecting vast amounts of personal data to enhance future functionalities. This approach highlights a systemic shift in tech priorities - from selling devices to securing a spot in our daily routines through AI-driven personalization. This strategy, however, raises significant ethical questions about privacy. The era we're entering will harvest data not only on our choices but also our interactions and emotions, pushing the boundaries of personal privacy. As AI technologies like #ChatGPT evolve to retain and utilize our information for deeper personalization, the implications stretch beyond convenience. The necessity of our data in shaping these interactions underscores a pivot towards AI systems that understand us on an unprecedentedly personal level. Yet, this introduces a crucial trade-off: the advantages of personalized AI assistants come at the cost of our privacy. Those willing to share their data might find themselves at a technological advantage, leveraging AI for superior efficiency in daily tasks. Despite these benefits, the potential for a middle ground exists. Keeping personal data localized on individual devices could mitigate broader privacy risks while still harnessing AI's benefits. This solution requires robust dialogue about innovation, user autonomy, and privacy protection. As we navigate this landscape, the choices made by companies like RabbitInc will likely set new norms for privacy and personalization in technology. Balancing these aspects will not only dictate the trajectory of consumer technology but also shape broader societal norms around privacy and data use. 
https://lnkd.in/dcKQ3uWg
Rabbit R1: Barely Reviewable
https://www.youtube.com/
-
I've done a brief video on the SBERT embedding model used with Kernel Memory. It's part of my series about Kernel Memory and Semantic Kernel; you can watch it on my YouTube channel. #ai #sbert #embeddings https://lnkd.in/dw33jaW5
KernelMemory - Some suggestions on how to choose a local Embedding Model.
https://www.youtube.com/
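To make the model choice above concrete: a basic sanity check when evaluating a local SBERT-style embedding model is whether semantically related texts land close together in embedding space, usually measured with cosine similarity. The sketch below is mine, not from the video; the vectors are tiny hypothetical stand-ins for real model outputs (a real SBERT model such as all-MiniLM-L6-v2 emits 384-dimensional vectors).

```python
# Cosine similarity: the standard yardstick for comparing embeddings
# produced by an SBERT-style model.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three sentences, for illustration only.
emb_cat = [0.9, 0.1, 0.2]      # "a cat sat on the mat"
emb_kitten = [0.85, 0.15, 0.25]  # "a kitten rested on the rug"
emb_invoice = [0.05, 0.9, 0.1]   # "please pay invoice #42"

print(cosine_similarity(emb_cat, emb_kitten))   # high: related meanings
print(cosine_similarity(emb_cat, emb_invoice))  # low: unrelated meanings
```

A model worth keeping should score clearly higher on the related pair than on the unrelated one; running the same check across candidate models is a quick first filter before heavier benchmarks.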
-
Good blog post from Lorin Hochstein... Lorin makes the point that quite often things that we assume are permanent are, in fact, not so. This is true in software development, as pointed out in the post, but it's a good reminder that things in life are rarely set in stone. What's considered 'ground truth' at one point in time may be completely upended by new discovery or changes in consensus. We're seeing this constantly today with AI and the rapid evolution of capabilities, both perceived and real. The key is to maintain pragmatic adaptability. In the face of constant change, we must build systems and processes that are resilient and flexible. This means making assumptions explicit, so they can be readily reevaluated. It means designing for modularity, so components can be updated independently. It means cultivating a mindset of continuous learning, so we can quickly integrate new insights. But adaptability isn't just about reacting to change; it's also about driving it responsibly. AI practitioners have a responsibility to proactively identify and mitigate risks, to engage diverse perspectives, and to ensure outcomes align with societal values. It is important to slow down and think deeply, even as things evolve rapidly. Change is constant. Pragmatic adaptability, guided by strong ethical principles, is critical. #AI #change #adaptability #ethicalAI https://lnkd.in/e3jh6tNa
-
“It’s become abundantly clear over the course of 2024 that writing good automated evals for LLM-powered systems is the skill that’s most needed to build useful applications on top of these models. If you have a strong eval suite you can adopt new models faster, iterate better and build more reliable and useful product features than your competition” https://lnkd.in/dBmtrvUZ
Things we learned about LLMs in 2024
simonwillison.net
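For readers wondering what such an eval suite looks like in practice, here is a minimal hypothetical sketch (mine, not from the linked post): each case pairs a prompt with a pass/fail check, and the model call is stubbed so the harness's shape is visible. Swapping in a different model or prompt and re-running gives the comparable pass rate the quote is talking about.

```python
# A minimal automated-eval harness for an LLM-powered feature.
# `call_model` is a hypothetical stand-in for a real model API call.

def call_model(prompt: str) -> str:
    """Stubbed model call, for illustration only."""
    canned = {
        "Capital of France?": "Paris",
        "2 + 2 = ?": "4",
        "Capital of Japan?": "Kyoto",  # deliberately wrong answer
    }
    return canned.get(prompt, "")

# Each eval case pairs a prompt with a simple pass/fail check.
EVAL_CASES = [
    ("Capital of France?", lambda out: "Paris" in out),
    ("2 + 2 = ?", lambda out: "4" in out),
    ("Capital of Japan?", lambda out: "Tokyo" in out),
]

def run_evals() -> float:
    """Run every case against the model and return the pass rate."""
    passed = sum(1 for prompt, check in EVAL_CASES
                 if check(call_model(prompt)))
    return passed / len(EVAL_CASES)

print(f"pass rate: {run_evals():.0%}")  # 2 of 3 cases pass here
```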
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
8mo
The concept of "AI in a box" raises questions about the balance between innovation and practicality in AI product design. While leveraging LLMs in such applications can offer intriguing possibilities, limitations like battery life and interface control highlight the challenges of integrating advanced AI capabilities into consumer devices. Considering the evolving landscape, how do you envision future iterations of AI-in-a-box products addressing these limitations while continuing to push the boundaries of AI innovation within practical consumer contexts?