Show HN: Karaoke for any song in any language (github.com/youkaclub)
126 points by youka on March 7, 2020 | 51 comments



Demo: https://peertube.co.uk/videos/watch/3c183b56-deb6-4e6b-a7a2-...

edit: Swapped the YouTube URL to PeerTube due to Content ID claim issues.


thanks but the video is unavailable


Ah yes. It got copyright claimed for "Love Me Do The Beatles", even though it's in the public domain in Europe and Canada.

Classic Youtube.


super impressive! The UI seems pretty intuitive. Great work


I installed this on Mac OS but the program always fails with:

  Uncaught Exception:
  Error: Could not get code signature for running application
      at m (/Applications/Youka.app/Contents/Resources/app/.webpack/main/index.js:1:12481)
      at App.<anonymous> (/Applications/Youka.app/Contents/Resources/app/.webpack/main/index.js:1:14365)
      at App.emit (events.js:215:7)


Just reopen it and you will be fine (I don’t have a spare $99/year for an Apple code signature).


I wish the readme had a description of how Youka works. Looks promising, but I’m not sure it does what I think it does.


I'll add some explanation soon. Here's the main process:

Search your query in YouTube using https://github.com/youkaclub/youka-youtube

Search lyrics using https://github.com/youkaclub/youka-lyrics

Split the vocals from instruments using https://github.com/deezer/spleeter

Align text to voice (the hardest part) using some private api
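
To make the third step concrete, the spleeter part looks roughly like this, following spleeter's documented Python API (file names here are placeholders, not Youka's actual code):

  from spleeter.separator import Separator

  # "spleeter:2stems" splits audio into vocals + accompaniment.
  separator = Separator("spleeter:2stems")

  # Writes output/song/vocals.wav and output/song/accompaniment.wav.
  separator.separate_to_file("song.mp3", "output/")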


> Align text to voice (the hardest part) using some private api

That's also the part that would be most interesting to have explained. Is it language-agnostic? After all, the title says "in any language", but I can't think of any text-audio alignment algorithms that don't require a language-specific model. (Unless you just count characters and assume they map linearly to time, which I'd expect to go very badly.)


Having worked for many years in a linguistics research lab where we spent a lot of money paying people to edit and align subtitles and audio transcripts, and having largely written what was at the time the most sophisticated subtitle-and-transcript editing tool available, I can confirm: counting characters and mapping them linearly to timespan, even after isolating vocals, does indeed go very poorly. And much worse when there's singing involved.
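
For concreteness, the naive baseline in question is something like this sketch (purely illustrative; naive_align is a made-up helper, not anyone's real code):

  # Naive baseline: assume characters map linearly onto time.
  # This is the approach that goes very poorly, as noted above.
  def naive_align(lines, audio_duration):
      total_chars = sum(len(line) for line in lines)
      timings, elapsed = [], 0.0
      for line in lines:
          span = audio_duration * len(line) / total_chars
          timings.append((elapsed, elapsed + span, line))
          elapsed += span
      return timings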


So let’s play: if you can guess the alignment method, I’ll open source it :)


Alternately, since you say speech recognition isn't "even close", I might try going the other way--doing text-to-speech on the lyrics, attempting to align the two speech tracks, and then back-porting the timecodes from the audio alignment onto the text.

But that seems a lot more complicated... so, unlikely.

A way to cheat that would probably work well enough most of the time would be to do spectrographic analysis on the audio stream to identify syllables, and then similarly just count syllables in the known text and line those up. That works better the more consistent your spelling system is, though, and still requires language-specific modelling. If you actually want to do a decent job cross-linguistically, you'd need, in the general case, a dictionary for every supported language listing syllable counts for each word (because not everybody's orthography is transparent enough to make simple models like counting character sequences work).
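
A rough sketch of that cheat, using librosa's onset detector as a crude stand-in for proper spectrographic syllable-nucleus detection (librosa and the file names are my illustration, not anything from Youka):

  import re
  import librosa

  # Approximate syllable nuclei in the isolated vocal stem via onsets.
  y, sr = librosa.load("vocals.wav")
  onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

  # Crude, English-only text syllable count: runs of vowel letters per
  # word -- exactly the kind of orthography-dependent shortcut that
  # breaks down cross-linguistically.
  def count_syllables(text):
      return sum(max(1, len(re.findall(r"[aeiouy]+", w)))
                 for w in text.lower().split())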

If you actually have a fully language-agnostic algorithm for aligning text to audio that's actually decently accurate, though, that's gotta be worth at least a Master's degree in computational linguistics, 'cause on the face of it it doesn't seem to me (who has such a Master's degree) that it should even theoretically be possible.


You are close enough, so I have to respect my word. I’m not a genius, just a lego builder. I’ve tried a lot of methods, from DL to ML, but the aeneas project (with some optimizations) gave me the best results. Amazing project and an even better personality behind it. Take a look at https://github.com/readbeyond/aeneas . Together with espeak-ng, you can get good results for line-level alignment across 108 languages.
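
For reference, line-level alignment with aeneas looks roughly like this (a minimal sketch following the aeneas README; paths and the language code are placeholders, not Youka's actual code):

  from aeneas.executetask import ExecuteTask
  from aeneas.task import Task

  # Plain-text lyrics, one line per row; emit a JSON sync map.
  config = u"task_language=eng|is_text_type=plain|os_task_file_format=json"
  task = Task(config_string=config)
  task.audio_file_path_absolute = "/tmp/vocals.wav"    # isolated vocal stem
  task.text_file_path_absolute = "/tmp/lyrics.txt"
  task.sync_map_file_path_absolute = "/tmp/syncmap.json"

  ExecuteTask(task).execute()
  task.output_sync_map_file()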


Good to know that aeneas works reasonably well even for sung speech. I've tried using aeneas for LibriVox audiobooks (10+ hours), which failed because it tries to load the whole file into memory at once and then compute FFTs on it all at once etc., which I don't have the RAM for. So right now I'm Rewriting in Rust™ using iterators to hopefully reduce memory usage and improve performance.

Espeak-ng supporting 108 languages is maybe a bit misleading. They have pronunciation definitions for many languages, but the actual level of support varies widely.

For Mandarin, espeak-ng 1.49.2 has a bug where it reads the tone numbers out loud instead of modifying the pitch contour, so e.g. the number 四 (four) is pronounced "si si" instead of "sì", because it has the fourth tone. That's the version packaged for Ubuntu, so you may be using it for your API.

For Japanese, kanji aren't supported at all, so 四 is pronounced as "Chinese letter" (in English). For proper Japanese support, you'd need to switch to a different TTS engine like Open JTalk or preprocess the text to transform it into kana.
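
If you go the preprocessing route, converting kanji to kana before handing text to the TTS engine could look like this (a sketch assuming the third-party pykakasi library; just one option, not something Youka uses):

  import pykakasi

  # Convert mixed kanji/kana text to katakana before TTS.
  kks = pykakasi.kakasi()
  line = "四月の雨"
  kana = "".join(item["kana"] for item in kks.convert(line))
  print(kana)  # katakana rendering of the lyric line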

Also note that aeneas is licensed under the AGPL, which requires you to offer the source code if you let others interact with the program over a network (which is what your API does). So your attempt to keep the secret sauce private and only reveal it once someone guessed the algorithm was likely illegal. You should add proper copyright notices to your program and to audioai.online.


Thanks for your reply, I’ll add a copyright notice soon. I didn’t really try to keep it private; otherwise I could have just ignored the question. The reason I didn’t cite it in the first place is that I’m still testing a few alternatives.


Ah! It's not even trying to do word- or syllable-level alignment. Well, that makes the problem considerably more error-tolerant. And they specifically call out ASR-based aligners as more accurate, so that makes me feel good about myself! Still, that's a cool project; thanks for pointing it out. I shall have to dig into it and see what they are actually doing.

Even with only line-level accuracy, that would've been nice to have 7 years ago... but I see the first commit to the project is only in 2015. Might still be useful to some of my old colleagues, though; I'll have to see if they've heard of it.


I was playing with aeneas and I didn't really find it THAT accurate. Using Syllabification-by-Analogy and then doing some optimizations, like matching choruses or verses that are repeated, yielded interesting results. When I was doing this, spleeter and demucs and the other vocal isolators weren't out yet, so I should probably have another go with those...


Based on your experience, which alignment method/system is the state of the art? (I’m looking for accurate word/syllable level alignment for Youka)


I actually have very little direct experience with automated forced alignment; I have enough experience in the space to know that naive approaches suck, but back when my boss was paying people to do manual alignment most of the effort went into second-language subtitles for pedagogical studies... which means the text doesn't actually represent the same words that are in the audio, because they're words in a different language, and nothing would do a good job of accurately aligning that! So I got very little support for building in a more sophisticated auto-alignment system.

My intuition, however, is that a meet-in-the-middle approach using automatic speech recognition and then aligning the resulting text streams would be optimal, and indeed every other major forced-alignment tool besides aeneas (https://github.com/pettarin/forced-alignment-tools) does seem to use that approach. The catch, of course, is that you actually need decent ASR language models for every target language to make that work, and as you can see from that list, it is rare for any given engine to support more than a few languages; CMU Sphinx probably has the widest support, although it's not the highest-end toolkit for popular languages like English.

So, if you really want to maintain the broadest possible language support, and you can afford the API fees, building a new alignment engine that piggy-backs on Microsoft's or IBM's speech recognition APIs is probably the best option--or, to keep it cheap, I'd use Sphinx's aligner as the preferred option for all the languages it has models for, and either fall back on aeneas for the remaining languages or (if you can afford occasional API calls to commercial services for the occasional less-popular language) upgrade to Microsoft/IBM services for those.
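
Sketched as a dispatcher, that strategy might look like this (every helper below is hypothetical; none of these are real Sphinx, aeneas, or Azure/Watson calls):

  # Hypothetical wrappers; each would call the named tool/service.
  def sphinx_align(audio, lyrics, lang): ...
  def aeneas_align(audio, lyrics, lang): ...
  def commercial_asr_align(audio, lyrics, lang): ...

  SPHINX_LANGS = {"en", "fr", "de", "es", "zh"}  # illustrative, not Sphinx's real list

  def align(audio, lyrics, lang, can_afford_api=False):
      if lang in SPHINX_LANGS:
          return sphinx_align(audio, lyrics, lang)
      if can_afford_api:
          return commercial_asr_align(audio, lyrics, lang)
      return aeneas_align(audio, lyrics, lang)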


I’ve tested every single ASR alignment solution mentioned here https://github.com/pettarin/forced-alignment-tools, but they all performed poorly compared to aeneas, even with good language models (English).


Interesting....


My little secret hack for better TTS results is to make the singing sound like speaking. Currently I’m using the SoX pitch filter; do you have another idea how to achieve that?
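
For reference, that preprocessing step can be driven like this (a sketch; the -300-cent shift is an illustrative value, not Youka's actual setting):

  import subprocess

  # Shift the vocal stem down 300 cents (3 semitones) so sung vocals
  # land closer to a speaking register before alignment.
  subprocess.run(
      ["sox", "vocals.wav", "vocals_shifted.wav", "pitch", "-300"],
      check=True,
  )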


Sorry, that's where my expertise runs out. I could tell you all about analyzing the linguistic structure of the text, but my experience with audio processing is limited to reading spectrograms and trusting other people's ASR tools.


How's the performance on some of the harder songs to align? When the voice is very melodic, or has that characteristic high female pitch that can get mistaken for instruments? Something like "Royals" by Lorde, maybe.


I preprocess the vocals using SoX, so the female singing becomes more like male speaking.


The way I'd do it is to use an existing speech recognition system with a large number of language models available (like CMU Sphinx--but probably not CMU Sphinx, 'cause I don't think there are decent openly-available models for 108 different languages for Sphinx; maybe Microsoft's Azure speech-to-text API or IBM's Watson speech recognition or something like that) to produce a rough transcript with timecodes, and then meet in the middle: use the timecodes from speech recognition, and the known-good text from whatever lyrics you already found, and reduce it to a text-to-text alignment problem so you can match up the ASR timecodes to the known-good text. First pass, I'd probably try an LCS match on the two text streams, but if that wasn't good enough, I'm sure there are better algorithms in the bioinformatics literature.
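
In code, that meet-in-the-middle step might look something like this, using difflib's matching blocks as a simple LCS stand-in (the ASR word/timecode format is hypothetical):

  import difflib

  def transfer_timecodes(asr_words, lyric_words):
      # asr_words: [(word, start_time), ...] from the ASR engine
      # (hypothetical format); lyric_words: the known-good lyrics.
      asr_tokens = [w for w, _ in asr_words]
      matcher = difflib.SequenceMatcher(a=asr_tokens, b=lyric_words)
      timed = {}
      for block in matcher.get_matching_blocks():
          for k in range(block.size):
              timed[block.b + k] = asr_words[block.a + k][1]
      # Lyric words with no ASR match keep a None timecode, to be
      # interpolated from their neighbours.
      return [(w, timed.get(i)) for i, w in enumerate(lyric_words)]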


Speech recognition?


not even close


A method that can be algorithmically reduced to the FFT? In big-O terms, at least?


Examining the source, it looks like alignment is done via an HTML form data submission to 'https://api.audioai.online/split-align'. Manually visiting that website, however, is not very informative... the entire text of http://audioai.online is

  Audio AI API
    Split voice from audio
    Sync voice to text
  contact


You can use spleeter and align in any audio application.


The question is, how can you do it automatically...


Hey there! First of all, I want to tell you that the app is fantastic. I used the earlier version of this, when it was a website, from your previous HN post. And once again the alignment works quite well in my experience, as does the isolation.

In the future, it would be great to have a "portable" version of this for Windows that doesn't install anything. It's annoying to open up an app and have it install itself without any warning or user consent. You could just release a .zip file with the build as an option.


I’ve considered a few options for installing ffmpeg and chose this one. I’m open to other suggestions.


You can distribute a .zip file which includes a statically linked build of FFmpeg: https://ffmpeg.zeranoe.com/builds/ . Then just call it locally. There's no need to install it system-wide.
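
Calling the bundled binary locally is then just a relative-path invocation, e.g. (a sketch; paths are placeholders):

  import os
  import subprocess

  # Use the FFmpeg binary shipped next to the app rather than a
  # system-wide install.
  ffmpeg = os.path.join(os.path.dirname(__file__), "bin", "ffmpeg")
  subprocess.run([ffmpeg, "-i", "input.mp4", "-vn", "audio.wav"], check=True)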


I don’t install it system-wide; I just download a single binary into the Youka directory.


I get an error when trying to open any video:

Ooops, some error occurred :( Error: [Errno 2] No such file or directory: '/tmp/tmpphtr8ehu/accompaniment.aac'

When running on the official Windows 10 Sandbox (https://techcommunity.microsoft.com/t5/windows-kernel-intern...)

Edit: it somehow works for some songs. The concept is really nice. I love it.


Looks like a server-side bug (it can't really handle more than a single split process concurrently); I'll add a queue in the next version.


Personally I love karaoke, but looking at the repo and the website gave me no information whatsoever about this project. Maybe that's something you can work on? In the meantime I found this article, which reads quite positively: https://www.theverge.com/tldr/2020/2/19/21144452/youtube-you...



You're right! I'll add an illustration GIF soon.


Cool! So you were originally running it as a webapp, and then decided to open source it? Presumably due to legal reasons?


exactly


Is there a way to manually provide the lyrics? I have a substantial collection of songs in Chinese and Taiwanese, and it would be really helpful to use this to help me make lyrics videos for Pingtype. When I tried, I got this error:

Ooops, some error occurred :( Error: name 'espeakng_supported_langs' is not defined

I'll look into aeneas to see if that can give the API-level technical tools that I need - thank you for explaining that part in the other comments!


Note that it won't work for Taiwanese (I assume Hokkien) unless you add the necessary support to espeak-ng.

If your lyrics are in Peh-oe-ji, you'll need to define how the romanization maps to phonemes. You may be able to get some inspiration for that from the definitions for Mandarin and Cantonese. Though I just looked at the "phonology" section on Wikipedia https://en.wikipedia.org/wiki/Taiwanese_Hokkien#Phonology and the tone sandhi rules look a lot more complex than in any other Sinitic language I know.

If the lyrics use Chinese characters, there's the added difficulty of collecting a pronunciation dictionary, which I'd probably do by scraping https://twblg.dict.edu.tw/holodict_new/index.html , http://xiaoxue.iis.sinica.edu.tw/ccr/ and Wiktionary. (If you know any other sources for pronunciation data, I'm interested.)


Yes, I know about romanisation! I wrote Pingtype, and extracted romanisation dictionaries for Taiwanese Hokkien and Hakka by parsing Bible data.

https://pingtype.github.io

Tones are difficult, so I encode those as colours. Adding code to espeak-ng sounds very difficult. Most of the songs are in Mandarin though, so I'll try those first.


oh, I had the same idea and started working on it here https://github.com/redraw/karaoke-machine days after Deezer's spleeter was released, but stopped while searching for a way to sync the lyrics. thx! I'll try it out



From what I understand, it's software that helps you align lyrics to the music contained in a video.


Youka aligns the lyrics automatically; there's nothing left for you to do.


Thanks - that sounds great



