Key is to position the product evolution from this point as a journey, with discrete steps/periods on a quasi-roadmap, and make it clear to them that they as users/clients will help shape that journey.
So picture a slide with four steps/phases enumerated, and a talk track like: "Today, we're at Phase 1, which provides the most important feature XYZ but has some constraints around diction; next, in Phase 2, the algorithms will tolerate even mediocre diction, and feature ABC will also come online to deliver 123 value; in Phase 3, we'll have enough data that diction is a non-issue, and features DEF and GHI are planned".
So the core question is - does Phase 1 have enough value? Is that journey/roadmap worth it for the customer putting up with limited value today?
You might be surprised either way. If even the limited experience is still super valuable, great! Or you may find that you DON'T have an MVP yet since it's not usable in its current form regardless of the journey from here.
Many insightful comments here in this thread, thank you all very much. I'm really afraid we can't solve the "mediocre diction" problem. The issue is that speech-to-text is still an open problem where noise and poor diction can ruin many tools, but in the ideal case (no noise and good diction), accuracy is around 80-100%.
> in order to have accuracy and a fully satisfactory experience, it requires some conditions, because it deals with random variables that I still don't have full control over
> I'm really afraid we can't solve a "mediocre diction" problem.
So you're not asking "How can I present my work-in-progress to potential customers so that they focus on the potential & not the missing features?".
Instead you're asking "How can I present a product that has (and always will have) limitations?" and so will never offer a "fully satisfactory experience". Is that right?
I think this is an important distinction (that I didn't get from reading your original post).
Sorry, and yes you're right. So the question is: how to present a product that will always have some kind of limitation? It will solve a problem but under certain circumstances. But I think the answers here still apply.
Be fully honest. At least with me, a supplier gains points by communicating limitations well, even without me asking. Being promised the end to all problems raises my skepticism tenfold.
Feedback, in the UI sense: one of my biggest annoyances is using Google Auto. I issue a voice command and it says "Sorry, I don't understand 'navigate home'". Clearly the audio processing was fine, clearly the voice-to-text interpretation was fine, and 'navigate home' is a command it recognises, so I deduce there was some back-end error, but it's not specified in the application's feedback. It seems it would be super easy to check whether the phrase they're telling the user they don't understand is actually in the list of common phrases the application handles. Something like: "Our app works best in a brightly lit environment, with microphone noise filtering turned on, and a clutter-free, high-contrast background".
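To make the idea concrete, here's a minimal sketch of that check. Everything here is hypothetical (the command list, function names, and the `backend_ok` flag are made up for illustration): if the transcribed phrase is in the known-command list, the failure must be elsewhere, so the error message shouldn't blame the user's speech.

```python
# Hypothetical sketch: distinguish "phrase not recognised" from a
# back-end failure by checking the transcription against the known
# command list. KNOWN_COMMANDS and backend_ok are illustrative.
KNOWN_COMMANDS = {"navigate home", "call mom", "play music"}

def explain_failure(transcribed_phrase: str, backend_ok: bool) -> str:
    phrase = transcribed_phrase.lower().strip()
    if phrase not in KNOWN_COMMANDS:
        # The phrase genuinely isn't something we handle.
        return f"Sorry, '{phrase}' isn't a command I know."
    if not backend_ok:
        # The phrase IS supported, so don't blame the user's speech.
        return f"I understood '{phrase}', but the service is unavailable right now."
    return f"Running '{phrase}'."

print(explain_failure("Navigate Home", backend_ok=False))
```

The point is just that one set-membership test turns a misleading "I don't understand" into an accurate "the back end failed".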
Like, I wish computer games came with simple benchmarks (free before you buy!): "if you score above 100 on $benchmark, you should be able to run at 60fps on default settings; above 200, 75fps on max". Then it's really easy to know whether things are working or you need to monkey around with drivers and settings and such.
IMO if you're struggling to get good input, then you should score the input against the characteristics you want to improve, like "30% for audio quality, 80% for recognised commands, 50% for gesture quality", and update the score on the fly so users can easily tell if they're doing it right.
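A rough sketch of such a live score, using the example percentages above. The weights, thresholds, and messages are all made-up assumptions for illustration, not a real scoring scheme:

```python
# Hypothetical sketch: combine per-channel quality scores (0-100) into
# one live score so the user can tell whether their input is good enough.
# Weights and thresholds are invented for illustration.
def input_score(audio_quality: float, command_recognition: float,
                gesture_quality: float) -> float:
    weights = {"audio": 0.4, "commands": 0.4, "gesture": 0.2}
    score = (weights["audio"] * audio_quality
             + weights["commands"] * command_recognition
             + weights["gesture"] * gesture_quality)
    return round(score, 1)

def feedback(score: float) -> str:
    if score >= 70:
        return "Good input - results should be reliable."
    if score >= 40:
        return "Marginal input - try reducing background noise."
    return "Poor input - results will likely be wrong."

# The comment's example: 30% audio, 80% commands, 50% gestures.
s = input_score(30, 80, 50)
print(s, feedback(s))
```

Recomputing the score on every input frame and surfacing the message in the UI is what lets the user self-correct instead of guessing why the tool failed.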
What about clients who want that one feature that isn't present currently, but your roadmap shows it's coming, so they decide to wait till you get there?
There's a chance that, without revealing your roadmap, they might have gone with the current version. Like, you make a pitch, they like it and want to buy, but then you show your roadmap and they say "oh nice, let's wait till you reach this"?
This!! I worked for a startup and we were always chasing the "I would buy if you had this feature..." feature from every company that hadn't yet bought the product. It was a total mess. And worst of all, most companies didn't buy the product even once we implemented the exact feature they wanted.
This is a great example of setting expectations. Plus I think getting the answer to "does Phase 1 have enough value?" early on is super important, even if the answer is no.