Nice job. I'd recommend "publishing" some bilingual books that have already been made on the platform as a better pattern for discovery since content creation requires signing up.
FWIW, translating into "major languages" isn't necessarily much help. If I speak one of the supported languages, say Vietnamese, being able to look at the German/Chinese/French is useless because 1) I can't read those, I already said I read Vietnamese, and 2) the translation models often vary widely between languages so doing well on French doesn't mean they will do well on something else.
> 2) the translation models often vary widely between languages so doing well on French doesn't mean they will do well on something else.
Especially this. I often come across AI products that claim to do well in 100+ languages, show some really good DE/FR/SP/RU examples, then I try it with my language (Slovene) and am just disappointed. If you claim to support all those languages, please have a sample result in all of them. Even if they aren't all equally good, it comes across as more genuine than making bold claims that anyone who speaks a language with < 10 million speakers knows likely aren't true.
Word Error Rate (WER) is the standardised metric for understanding that. It's the number of words you would need to add, remove or replace per 100 words (so lower is better). Spanish these days is often down in the 2-3 region, English around 4, and Finnish in the 7-8 region. Slovenian is typically 15-20, and Punjabi, Bengali and other South Asian languages hit 40+.
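If you want to compute it yourself, word-level WER is just edit distance over words divided by the reference length. A minimal sketch (assuming whitespace tokenisation is good enough for a rough estimate, which it isn't for every language):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Rough word error rate: (substitutions + insertions + deletions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. wer("the cat sat on the mat", "the cat sat in the mat") == 1/6,
# i.e. roughly 17 errors per 100 words.
```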
By far the biggest factor seems to be the amount of translated material available in that language, followed by how "obvious" the rules of the language are, which makes it easier to decompose algorithmically.
I think that's why calculating and showing accuracy metrics (like WER) against known-good translations is useful. You can even highlight words that the model struggles with, giving the reader useful context.
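For example, a quick-and-dirty way to do that highlighting is to diff the machine output against the reference translation and mark the words that differ. A sketch (the word splitting and the `<mark>` tags are just illustrative assumptions):

```python
import difflib

def highlight_mistranslations(reference: str, machine: str) -> str:
    """Wrap words in the machine output that don't match the reference in <mark> tags."""
    ref, hyp = reference.split(), machine.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if op == "equal":
            out.extend(hyp[j1:j2])
        elif op in ("replace", "insert"):
            # Words the model produced that don't line up with the reference.
            out.extend(f"<mark>{w}</mark>" for w in hyp[j1:j2])
        # 'delete' means the model dropped reference words; you could flag those too.
    return " ".join(out)

print(highlight_mistranslations("the cat sat on the mat",
                                "the cat sat in the mat"))
# -> the cat sat <mark>in</mark> the mat
```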
Translation models are getting better all the time - it's a weird artefact of transformer architectures, largely missed in the GenAI hype, that they're pretty great at translation, especially across languages with smaller training corpora - but you should definitely know if the text you're reading is only likely to be 90% "correctly" translated.
I think, judging from your responses in this thread, that you're focusing too much on the 'generating stories' part. IMO that's the least attractive/useful part of your offering. The 'read something in two languages' part is what is useful to your users.

I made something not quite like your app but related: a tool to translate (epub) ebooks into two-column ebooks with a translated version on one side, so you're not constantly googling/chatgpting things. What you have has the potential for more interactivity though (e.g. my dual-language ebooks can't highlight words to match from left to right or vice versa).

I would love your tool if it wasn't for the 'the content is AI generated' part. I'm not looking to add more AI slop into my life, but that doesn't mean I'm against it for actually useful purposes, like translations/language learning.
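For anyone curious, the two-column conversion boils down to something like this (not my actual tool, just a sketch; it assumes you already have the source and translated paragraphs aligned as parallel lists, which is the hard part, and it emits plain HTML that an epub chapter could wrap):

```python
from html import escape

def two_column_html(source_paras: list[str], translated_paras: list[str]) -> str:
    """Render aligned paragraph pairs as a two-column HTML table, source on the left."""
    rows = []
    for src, tgt in zip(source_paras, translated_paras):
        rows.append(f"<tr><td>{escape(src)}</td><td>{escape(tgt)}</td></tr>")
    return "<table style='width:100%; vertical-align:top'>" + "".join(rows) + "</table>"
```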