The New Old Way of Learning Languages (2008) (theamericanscholar.org)
108 points by Tomte on April 17, 2022 | 38 comments



This is correct. Many highly efficient language learners, like polyglots who learn languages as a hobby, know about this forgotten wisdom.

Along those lines, there's a small, 100+-year-old French language-learning company called Assimil [1]. They're universally beloved by polyglots [2] [3]. They make courses out of clever dialogues, with your First Language on one side and your Target Language on the other, and the courses come with audio for your Target Language. They're cheap, around $50-$100, and packed with enough information to bring a total novice to a respectable intermediate level.

I used their English-Spanish course to learn Spanish extremely quickly. After a few months of study, I was able to watch native Spanish TV programs with Spanish subtitles and understand 80-90% of them. I would emphatically recommend their method to anyone. I am not affiliated in any way with the company.

There's also the Listening-Reading method, which many polyglots swear by, once you've gotten your bearings in your Target Language. It's a rigorous but fun method of learning your Target Language by enjoying audiobooks of your choice.

Incidentally, it is wise, if you're serious about learning a language quickly, to seek polyglot forums and to use their accumulated knowledge to smooth the path ahead of you.

[1] https://www.assimil.com [2] https://www.youtube.com/results?search_query=assimil [3] https://www.google.com/search?q=site%3Alanguage-learners.org...


Any polyglot forums you'd recommend?


My two favorites:

https://forum.language-learners.org has excellent polyglots.

https://www.reddit.com/r/languagelearning is good, though it has more beginners. However, it has a dramatically better UI.

I often use Google's "site:" feature to search them.

I eventually discovered that, while everyone has their idiosyncrasies, the best learners share 80% of their methods and mindsets in common. I also noticed what causes failure by paying attention to what struggling students were saying and what the strongest students were politely chastising them for.


FWIW I would totally read your blog post about that if you wrote one!


Thanks!


https://refold.la has a good community.


The single best language learning resource I found when studying Japanese was Mangajin:

https://en.wikipedia.org/wiki/Mangajin

They did both English comics translated to Japanese:

https://i.redd.it/x7f4zzk9dff01.png

And Japanese manga translated to English:

https://www.pinterest.com/pin/mangajin-48-in-2021--382806655...

With word for word translations, romanized transliteration, and extensive notes. They also had lots of articles on Japanese culture, advertisements, etc.

I couldn't wait for the next issue to come out. I wonder if there's any equivalent for learning other languages through comics?


Agreed that reading with readily-available translations is the most important method of learning, but with technology we can do better than the interlinear texts this article describes, which seems to just be foreign texts where each word has an English translation on top. I see several limitations of this approach:

- The English is hard not to see, so it's too easy to accidentally cheat and avoid practicing active recall.

- Word-for-word translations aren't effective when your current language and target language are sufficiently different.

If you're willing to read on a screen, we can instead have only the foreign text visible by default, but have translations (both word-for-word and sentence-for-sentence) readily available on click or hover. Du Chinese (no relation, just a fan) does this really well for Chinese, and I'm sure there are equivalent tools for other languages.

Still, any American-born Chinese knows that being able to understand a language doesn't mean you can speak it yourself. Du Chinese is great for reading/listening practice but I couldn't find any counterpart for writing/speaking practice. So I'm working on my own counterpart that works sort of like Du Chinese but in reverse:

- Start by showing you English sentences, of progressively increasing complexity

- Click a sentence to reveal its Chinese translation

- Hover over any word of the translation to get the word's definition and etymology

- Along with the translation, a full grammar explanation

I haven't released this side project yet but I've been using it myself, and I feel like this combined with reading practice is close to optimal learning efficiency.
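Since the project is unreleased, all names below are hypothetical, but a minimal sketch of the data model behind the steps above might look like this:

```python
from dataclasses import dataclass

@dataclass
class Word:
    hanzi: str
    definition: str
    etymology: str = ""   # shown on hover

@dataclass
class Lesson:
    english: str               # shown first
    chinese: list[Word]        # revealed on click
    grammar_note: str = ""     # shown alongside the translation

    def reveal(self) -> str:
        """The Chinese translation, revealed as one string."""
        return "".join(w.hanzi for w in self.chinese)

lesson = Lesson(
    english="I drink tea.",
    chinese=[Word("我", "I"), Word("喝", "to drink"), Word("茶", "tea")],
    grammar_note="SVO order; the verb is not conjugated.",
)
print(lesson.reveal())  # → 我喝茶
```

The key design point is that the word-level glosses and the sentence-level grammar note live on the same record, so the UI can progressively disclose them in the order the comment describes.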


Yeah, reading is not writing. I can partially read a lot more languages than I can write.


I'd add that writing on a computer is also not writing, for some languages. E.g. with Japanese IMEs, writing is an exercise of knowing how a word is pronounced and being able to visually distinguish homophones. Having not practiced writing on paper in a while, I can barely write anything in Japanese while I can type just fine (or swipe, on a smartphone).

Similarly listening and speaking are different things. I can read or hear a lot of words that wouldn't cross my mind when I speak.


My Spanish, German, and French are rusty as hell these days. But other commenters have touched on the "side-by-side" method. Oddly, Asterix, Lucky Luke, and Bob & Bobette all had dual-language editions. And of course, you could just get the English-language edition and the French edition and read them side by side yourself. It was great! The really awesome thing was the library rule: during some recess sessions we would be sent to the library (if a teacher wasn't available to take the class due to illness, or if it was raining outside), and you could take any book you liked to quietly read, but you couldn't take the hardback comic books. The ones in English. But you could take the foreign-language versions. And if you sat near the substitute teacher with one, you could ask for the English-language version too, to help with translation. I remember many hours in the school library reading all those comics in French and German.


I’m a fan of the “interlinear” system (actually the Loeb editions, which have one page in Latin or Greek and the facing page in English. Forbidden in high school, of course).

I attended a high school that taught Greek and Latin (required) as living languages: written and spoken usage right from the beginning no different from French (also required).

The nice thing about them being active languages rather than fossilized ones is that various teachers (even a couple of English teachers and even one history teacher iirc) would just use them as a language in a class, school assembly etc. Of course I never heard a student do so!

I recognize now that I understand the classics we read far more deeply than I would have in translation. Whether that understanding is important, of course, depends on the reader. It is important to me.


I think it's about time people realized that learning grammar early is very detrimental to the language-learning process. As the article mentions: "Nevertheless, Hamilton did not oppose the study of grammar, only its timing. 'The theory of grammar should be taught only when the pupil can read the language, and understand at least an easy book in it.'"

As an example, in most Muslim-majority countries, Arabic is taught as a second or third language. Arabic grammar is apparently one of the hardest to learn, and ironically, modern Arabic grammar was largely developed by Iranian scholars, not native Arabic speakers. As a result, many students cannot read and understand Arabic books (e.g., the Quran) even after several years of study.

The gold standard for the Arabic language is the Quran. Even though the Quran contains a huge number of words (~77,000 total and ~15,000 unique), because of its Semitic origin it has only ~2,000 root words [1].

For a better sense of scale: English has ~10,000 five-letter words, and the popular word game Wordle uses only about 2,000 of them. The entire Quran has less vocabulary to memorize than even the five-letter words of English alone, to say nothing of the 100,000+ other words in the English lexicon.

[1] The Clear Quran Series Dictionary:

https://theclearquran.org/dictionary/


Knowing the fundamentals of the grammar early helps immensely. Not learning them fully, but at least being aware of them.

E.g., the fact that there are four verb moods in French - indicatif, subjonctif, conditionnel, and impératif - is usually explained a few weeks, if not months, into learning the language. Having been through this, I would've much preferred this to be mentioned right away, because it greatly helps with making sense of written text. I am a computer programmer, however, so it might not apply to everyone.

Ditto for verb conjugation comprising two distinct classes - simple and compound. Trivial to explain, yet it's inevitably smeared across several weeks, confusing the heck out of an otherwise trivial subject.

Another example, also from French: its alien-structured use of pronouns. This too, IMO, needs to be explained in the first few days. Yes, you can go by example here, but a single hour of explanation will greatly simplify grokking all the examples that follow.


Here's a tool for creating your own interlinear texts

https://interlinearbooks.com/tools/interlinearizer/


If you only want to read a language, it can work to make Kindle reading in it your regular bedtime reading, after booting up to a minimum level from Duolingo or high school classes. The app's dictionary lookup and autotranslation are mostly good enough -- not up to a custom interlinear translation, but accessible at a tap. You can ramp up from relatively easy and familiar books like Harry Potter in translation.

At least I'd call it a success for French and Spanish for me. I did check out books with facing-page translations, but there's just less choice of works in that format that I really want to read.

(I'd rather recommend out-of-copyright works on a free reader, among other reasons because you could hack it to try specialized UX like an interlinear translation, or connect it to your favorite spaced-repetition system -- but I'm not familiar with anything with UX as good as the Kindle on a tablet. Suggestions appreciated.)


Speaking of out-of-copyright works, I checked Project Gutenberg and they do have books in various languages. There's probably some way to programmatically search their archive to list books that are available in multiple languages - which might be suitable for creating "interlinear translations".

For example, a list of all their books in Spanish:

https://www.gutenberg.org/browse/languages/es

Or Japanese:

https://www.gutenberg.org/browse/languages/ja

This latter is a short list though, and they're old books with a style of writing that is not friendly for learning the language.

I suppose manual curation is necessary to decide what kind of books are best for this purpose, starting with a smaller vocabulary of simple words.
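As a first pass at the programmatic search, here's a minimal sketch that groups titles by language. It assumes the shape of Project Gutenberg's machine-readable catalog (pg_catalog.csv, linked from their feeds page); the column names below come from that file, but the rows are illustrative, not real downloads:

```python
import csv
import io
from collections import defaultdict

# Inline sample in the shape of Project Gutenberg's pg_catalog.csv.
SAMPLE = """\
Text#,Type,Issued,Title,Language,Authors,Subjects,LoCC,Bookshelves
1342,Text,1998-06-01,Pride and Prejudice,en,"Austen, Jane",,PR,
63468,Text,2020-10-18,Pride and Prejudice,fi,"Austen, Jane",,PR,
996,Text,1997-07-01,Don Quixote,en,"Cervantes Saavedra, Miguel de",,PQ,
"""

def multilingual_titles(catalog_csv: str) -> dict[str, set[str]]:
    """Return titles that appear in more than one language (exact-title match)."""
    langs_by_title = defaultdict(set)
    for row in csv.DictReader(io.StringIO(catalog_csv)):
        langs_by_title[row["Title"]].add(row["Language"])
    return {t: ls for t, ls in langs_by_title.items() if len(ls) > 1}

print(multilingual_titles(SAMPLE))  # Pride and Prejudice appears in 'en' and 'fi'
```

One caveat: translations often carry translated titles ("Don Quixote" vs. "Don Quijote"), so exact-title matching misses many pairs; matching on author plus fuzzy title would catch more.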

---

I remember in high school I discovered a book of poems by Pablo Neruda, with the Spanish original and English on pages side by side. It was a wonderful reading experience, and I learned quite a lot from it.

Neruda: Selected Poems (Bilingual Edition)


Thank you for pointing out the Neruda! I have a bilingual book of poetry by Borges -- I feel like for poems the bilingual format is still worth seeking out, though translations are chancy.


I don't know how much truth there is to it since I'm a relative novice at language learning, but from what I understand reading texts is a) excellent for building language skills and b) pretty terrible for your pronunciation and listening comprehension until you have some level of fluency with the language.

That's according to Marvin Brown who pioneered the AUA school for teaching Thai via only comprehensible input. The school seems to have a pretty solid track record of producing (western) graduates who speak without an accent. Pretty interesting stuff. He called his method Automatic Language Growth or ALG and of course there are strong opinions about whether or not it works as well as it is claimed.

https://en.wikipedia.org/wiki/J._Marvin_Brown


The article shits on the grammar-first approach to language learning because the students end up with knowledge that isn't sufficient to read books (i.e., impractical). However, I disagree that it's the wrong approach. If you teach a student the grammar of a language, then they're able to fill in the blanks in their vocabulary whenever they find time in life. If, instead, you focus on vocabulary first, then they may never find the time to make the sizeable investment in learning the difficult grammar that they're missing.

Also, grammar can be a 10x multiplier for some languages - learn the grammar of one Romance language and you're now much more familiar with the grammar of the rest.


I recently discovered this system and made some scripts to aid interlinear translation. For example https://github.com/enfrte/duo-text

I've also added some basic text manipulation (font-family, size) to the reading text. For example https://interlingue.2038.io/duo-text/


Hmm. I work in the field of second language education. I even read Hamilton’s pamphlet [1] ten years ago when writing a couple of papers on historical controversies about language education [2, 3].

I’m sure the Hamiltonian system works for some learners, especially those who are highly motivated to learn to read the target language. In most language classrooms, though, the method would fail quickly, because most learners of modern languages are more interested in learning to speak. They would also find the Hamiltonian method tedious and intellectually taxing.

However, I do agree with the author that the importance and difficulty of vocabulary acquisition are often overlooked in language education programs. I’m not sure, though, how many people would acquire vocabulary better through the Hamiltonian system than through other methods. There’s a lot of individual variation in learning styles.

When learning Japanese, which I now read, write, and speak fluently, I did a lot of brute-force vocabulary memorization using flashcards while also reading extensively without a dictionary and, at first, without much comprehension. Although I later worked as a Japanese-English translator, when initially learning the language I didn’t use translation at all, and all of the language classes I took were taught only in Japanese. But I don’t know many other people who learned languages successfully in the same way I did, and I myself wouldn’t be able to learn another language that way again.

In any case, if anyone wants to try the Hamiltonian method themselves, textbooks are available at the Internet Archive [4, 5]. Let me know if it works for you.

[1] https://archive.org/details/historyprinciple00hamirich/page/...

[2] http://gally.net/writings/2012_Disputes_about_Language_Teach...

[3] http://gally.net/writings/2012_Which_Languages_to_Teach.pdf

[4] https://archive.org/details/opera00sall/page/n3/mode/2up

[5] https://archive.org/details/corneliusneposad00nepo/page/n9/m...


Fascinating journey.

> I myself wouldn’t be able to learn another language that way again.

Because…?


Partly my age: I began learning Japanese when I was twenty-six, and I am now sixty-five. Language acquisition does get more difficult as we age. But probably more important is that, when I started studying Japanese, I was in a sweet spot of motivation, time, and background.

I had just moved to Japan and was working in Tokyo, first as an English editor and later as an English teacher. I knew no Japanese at first, and daily life was challenging; I was also very curious about the country and culture I had immersed myself in. Every bit of the language I acquired was useful and interesting, and that motivated me to continue studying hard.

I also had no family obligations then, so I had plenty of time to study. Plus I had majored in linguistics in college and had studied other languages previously, so I knew from experience what study methods would work for me.

I now have plenty of time to study another language if I wanted to, and I will have even more time after I retire next year. I should also be more knowledgeable and aware of my language-learning process than I was forty years ago. But I no longer have the practical motivation or the powers of concentration needed to spend three or four hours a day studying a language.


Thanks for the detail you provided. It’s sort of a hobby of mine to learn how people learn languages fluently. I’m your age and I bet either one of us could learn a language as difficult as Japanese if we chose to at this late date.


With a poor grounding in grammar, you will sometimes get the meaning of a sentence completely wrong, even if you separately know all of the nouns, verbs, and particles.


I think the key is to allow the learner to develop an innate understanding of the grammar through natural usage, instead of memorizing a list of formal grammar rules to apply. That would mean learning a second language the way we learn our first. Every child has a very good innate understanding of the grammar of their mother tongue long before they are introduced to concepts of formal grammar; e.g., a six-year-old can use verbs without knowing they are called "verbs," and can assign the proper tense long before being introduced to the formal rules of conjugation. So, as with a child learning a mother tongue, the study of formal grammar rules should eventually be introduced to a second-language learner, but not before the person has a basic innate understanding of the language.


The idea that adults should learn a second language as if they were toddlers has no merit whatsoever.

For one thing, you don't want to spend three years babbling; you are not small and cute.

Grammar rules don't have to be "formal" (and, anyway, real grammar defies formalization). They are just sentence patterns, with examples.

If you try reading material without front-loading on a working knowledge of, say, the most common 100-200 sentence patterns, you're not working as efficiently as you could be.

The sentence patterns in a grammar dictionary don't give you the grammar, only a way to navigate in the language so you can get the real grammar faster, with less time-wasting guesswork.


There's got to be a browser plugin that allows you to see an interlinear version of whatever you're looking at?
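I haven't found one either, but the core transformation such a plugin would need is simple. Here's a sketch in Python (the glossary and function names are made up, and a real extension would query a dictionary API) that uses HTML ruby annotations, which browsers already render as small text above the base word:

```python
import html

# Toy glossary standing in for a real dictionary lookup.
GLOSSARY = {"le": "the", "chat": "cat", "dort": "sleeps"}

def interlinearize(text: str, glossary: dict[str, str]) -> str:
    """Wrap each known word in <ruby> so its gloss renders above it."""
    out = []
    for word in text.split():
        gloss = glossary.get(word.lower())
        if gloss:
            out.append(f"<ruby>{html.escape(word)}<rt>{html.escape(gloss)}</rt></ruby>")
        else:
            out.append(html.escape(word))  # unknown words pass through unglossed
    return " ".join(out)

print(interlinearize("Le chat dort", GLOSSARY))
```

A browser extension would do the same walk over the page's text nodes; the hard parts are tokenization for languages without spaces and picking the right sense of each word, which is exactly where word-for-word glossing breaks down.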


As an avid reader for as long as I can remember (I have a vague memory of being able to read before I could speak, but that might just be a quirk of early-childhood memories), I find it an interesting insight that spoken and written vocabularies are different sizes.

I guess it comes from the fact that when we are speaking IRL we have more senses available to us, while in written form we have just that single channel.

Anecdotally, I often find myself reading and noticing a word that feels familiar but that I couldn't clearly define. The majority of the time I will just skip over it and continue reading; I guess there is a level of redundancy there. After moving to the United States, I had a manager who would use the word "Pollyanna" at least once a week in conversation, and it took me over a year of that before I finally looked it up to see what they meant.

From a cognitive and neurological point of view, it's pretty interesting that we can acquire such a large written vocabulary out of words that occur so infrequently.


Interesting that the author didn't mention Pimsleur. It is exactly how they teach you a language.


Love Pimsleur. I've used it for several languages. The only thing is it kind of leaves you at the intermediate level, and I don't know how to progress beyond that, apart from immersion.


Ugh - this makes much more sense than the "modern" grammar based approaches. Heck I might have a few more languages under my belt had I known this was even a thing! Time to do some more digging!


Interesting. I made a lot of progress learning Spanish via online films, interchanging the subtitles and soundtrack in both languages. A form of interlinear learning as explained in this piece?


“Hamilton also revised the word order of the original text to conform to the word order of modern languages”

This gave me pause.


A somewhat tangential aside came to mind as I neared the end of these two paragraphs:

> Hamilton (1769–1831) is important because he was one of the last major proponents of a pedagogical tradition, extending from antiquity, that made the study of texts the dominant focus of the teaching of foreign languages. In this method, teachers explicated the literal meanings of the words, phrases, and sentences of those texts. But by the 18th century, such disclosure was under frontal attack. Teachers had settled on grammar as the main subject matter, and students were expected to provide the meanings of texts by themselves, aided by a dictionary.

> In the last half of the 20th century, an explosion of computer-based studies of large texts, called “corpora,” has demonstrated that the number of words needed to read foreign-language books exceeds by several multiples the amount of vocabulary that is acquired by most foreign-language students. This huge vocabulary gap explains why it is impossible for most students to read extensive, sophisticated materials in foreign languages. Even many who are academically involved with foreign languages must depend heavily on dictionaries, consult translations, and accept reading with blind spots because of time constraints.

To me this translates alarmingly eagerly to the field of computer science:

"... the number of words needed to understand contemporary source code exceeds by several multiples the amount of vocabulary that is acquired by most developers. This huge vocabulary gap explains why it is impossible for many engineers to properly grok extensive, sophisticated codebases."

The "hrm" part:

"Even many who are academically involved with computer science must depend heavily on Google, consult Stack Overflow, and accept reading with blind spots because of time constraints."

(Something something academic vacuum versus real world)

I also have a bit of pause as I test to see if I can easily find correlation a little further back:

"... a pedagogical tradition, extending from antiquity, that made the study of source code the dominant focus of the teaching of foreign languages. In this method, teachers explicated the literal meanings of the symbols and constructions of the code. But by (???), such disclosure was under frontal attack. Teachers had settled on grammar as the main subject matter, and students were expected to provide the meanings of texts by themselves, aided by a dictionary."

...I think the second half of that last sentence explains modern whiteboard hiring!!

As for the grammar part, there's definitely a difficult-to-pin-down imprecision of expression in modern programming nowadays, an indirect inefficiency, a verbosity that is so semantically saturating that the brain can only handle it by fragmenting into a thousand pieces that can no longer see the bigger picture. Whatever this fundamental thing is... Java, every time you needed to write boilerplate in any language, and the nodejs ecosystem, are all like 5th-order side effects of it. I genuinely wonder what it is.

While wondering about the existence of possible parallel timelines to the present grammar-laden reality I remembered an interview with Arthur Whitney on programming (https://queue.acm.org/detail.cfm?id=1531242), quoted here with a bit of context:

> [BC] Right. People are able to retain a seven-digit phone number, but it drops off quickly at eight, nine, ten digits.

> [AW] If you're Cantonese, then it's ten. I have a very good friend, Roger Hui, who implements J. He was born in Hong Kong but grew up in Edmonton as I did. One day I asked him, "Roger, do you do math in English or Cantonese?" He smiled at me and said, "I do it in Cantonese because it's faster and it's completely regular."

> [BC] This raises an interesting question. When I heard about your early exposure to APL, a part of me wondered if this was like growing up with tonal languages. I think for most people who do not grow up with a tonal language, the brain simply cannot hear or express some of the tone differences because we use tone differently in nontonal languages. Do you think that your exposure to this kind of programming at such a young age actually influenced your thinking at a more nascent level?

> [AW] I think so, and I think that if kids got it even younger, they would have a bigger advantage. I've noticed over the years that I miss things because I didn't start young enough.

> [BC] To ask a slightly broader question, what is the connection between computer language and thought? To what degree does our choice of how we express software change the way we think about the problem?

> [AW] I think it does a lot. That was the point of Ken Iverson's Turing Award paper, "Notation as a Tool of Thought." I did pure mathematics in school, but later I was a teaching assistant for a graduate course in computer algorithms. I could see that the professor was getting killed by the notation. He was trying to express the idea of different kinds of matrix inner products, saying if you have a directed graph and you're looking at connections, then you write this triple nested loop in Fortran or Algol. It took him an hour to express it. What he really wanted to show was that for a connected graph it was an or-dot-and. If it's a graph of pipe capacities, then maybe it's a plus-dot-min. If he'd had APL or K as a notation, he could have covered that in a few seconds or maybe a minute, but because of the notation he couldn't do it.

> Another thing I saw that really killed me was in a class on provability, again, a graduate course where I was grading the students' work. In the '70s there was a lot of work on trying to prove programs correct. In this course the students had to do binary search and prove with these provability techniques that they were actually doing binary search. They handed in these long papers that were just so well argued, but the programs didn't work. I don't think a single one handled the edge conditions correctly. I could read the code and see the mistake, but I couldn't read the proofs.

> Ken believed that notation should be as high level as possible because, for example, if matrix product is plus-dot-times, there's no question about that being correct.

> [BC] By raising the level of abstraction, you make it easier for things to be correct by inspection.

> [AW] Yes. I have about 1,000 customers around the world in different banks and hedge funds on the equity side (where everything's going fine). I think the ratio of comment to code for them is actually much greater than one. I never comment anything because I'm always trying to make it so the code itself is the comment.

> [BC] Do you ever look at your own code and think, "What the hell was I doing here?"

> [AW] No, I guess I don't.

As I read the million-dollar paragraph (the chunky one describing Fortran) there's a point where I have a tiny fleeting bit of hesitancy:

> What he really wanted to show was that for a connected graph it was an or-dot-and. If it's a graph of pipe capacities, then maybe it's a plus-dot-min.

My brain: "That's cheating!! You're just mapping if-this-then-that! WoN't SoMeBoDy PlEaSe ThInK oF tHe GrAm--"

Me: "...haaaang on. Why am I trying to defend this so strongly? Why do I think this is the only right way??"

This is really weird. I genuinely feel like the goal of learning to program is to hammer grammar into my stupid brain, and that to do any less (to do what feels like copying and pasting mnemonics) is to fundamentally cheat in a way that will disservice me more significantly than almost anything else.

It's like the point of learning is to be a grammar parser.

Hmmmm. Wat do :(

There was also a recent article on here about a natural polyglot who learned languages for fun (https://news.ycombinator.com/item?id=30920287), and while there was a bit of sniping in the comments about the article's presentational style and hype, I found the content itself thought-provoking and memorable, and I'm reminded of this segment:

> For two hours, Vaughn works through a series of tests, reading English words, watching blue squares move around and listening to languages, some he knows and some he doesn’t. ...

> Each [MRI] image essentially breaks down his entire brain into two-centimeter cubes and monitors the amount of blood oxygen in each one. Every time the language-processing areas are activated, those cells use oxygen, and blood flows in to replenish them.

> By watching where those changes happen, the researchers can pinpoint exactly which parts of Vaughn’s brain are used for language.

> On the screen Malik-Moraleda is watching, it all looks like unchanging shades of gray. ... my brain scan looks the same.

> But after a week, the scans have been analyzed to produce two colorful maps of our brains.

> I’d assumed that Vaughn’s language areas would be massive and highly active, and mine pathetically puny. But the scans showed the opposite: the parts of Vaughn’s brain used to comprehend language are far smaller and quieter than mine. Even when we are reading the same words in English, I am using more of my brain and working harder than he ever has to.

This leads me to the argument/question/thought experiment/idea that grammar is just a macro-scale inefficiency that effectively wastes effort "just because" in the same way bureaucracy does "because scaling is hard". My question is about what dynamics drive that, and how to close the loop and become more efficient.


https://www.latinum.org.uk/shadowing/interlinear-method lists various old interlinears. Though even for the Latin-English interlinears it's not complete: for example it's missing the gonzo Completely Parsed series (eg. Gallic War chapter 1: https://archive.org/details/CommentariesOnTheGallicWarCaesar... ), and the Fully Parsed me-too series. I'm working on a summary of the old Latin-English and Latin-Ancient Greek interlinears but I don't know when it will be finished.

There are two modern commercial publishers I'm aware of which specialise in interlinears: https://hyplern.com and https://interlinearbooks.com/ .

Some people transcribed one of the Hamilton et al. interlinears, Cornelius Nepos' Vitae, into WikiSource: https://la.wikisource.org/wiki/Cornelii_Nepotis_Vitae_(Hamil... .

More modern (for the most part) but not at all unrelated: parts-of-speech tagging https://en.wikipedia.org/wiki/Part-of-speech_tagging and treebanks https://en.wikipedia.org/wiki/Treebank . https://syntacticus.org/ has a UI which is almost usable for learning-by-reading: https://syntacticus.org/sentence/proiel:20180408:caes-gal:52... . (The Perseus DL also has a classical-Latin treebank: https://perseusdl.github.io/treebank_data/ .)

Seemingly there's still no proper file-format supporting interlinears. Naturally enough: they're not straightforwardly recursive and tree-structured so the computing world's main response is to pretend that they don't exist lolz. (Paging Ted Nelson.) But it seems that interlinears are still very much in use among some research communities in linguistics, so every few years someone puts out a plaintive paper begging for someone to do something about the lack of a proper file format, see eg. https://aclanthology.org/L04-1143/ and https://dl.acm.org/doi/10.1145/3389010 , and then of course nothing happens.
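For what it's worth, those communities do have glossing conventions (e.g. the Leipzig Glossing Rules), just no agreed interchange format, precisely because the data is parallel tiers rather than a tree. A minimal sketch of what one line of an interlinear could look like as plain JSON (entirely illustrative, not any existing standard):

```python
import json

# One line of interlinear glossed text: parallel, word-aligned tiers
# plus a free translation that isn't word-aligned.
record = {
    "source": ["Gallia", "est", "omnis", "divisa"],
    "gloss": ["Gaul", "is", "all", "divided"],
    "translation": "All Gaul is divided",
}

def aligned(rec: dict) -> bool:
    """Every list-valued tier must have the same number of tokens."""
    lengths = {len(v) for v in rec.values() if isinstance(v, list)}
    return len(lengths) <= 1

print(aligned(record))  # True
print(json.dumps(record, ensure_ascii=False))
```

The alignment constraint is the part no general-purpose tree format enforces for you, which is presumably why the plaintive papers keep coming.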


I've just found another publisher of classical interlinears: http://www.classicalbooks.co.uk , and a journal paper about it from the proprietor, "Using translation-based CI to read Latin literature": https://www.cambridge.org/core/journals/journal-of-classics-...



