
Hi Peter, thanks for the AMA.

Are there any benefits to converting from a green card to citizenship, in the context of employment? Access to some government jobs is the only thing I can think of.

Can green cards be extended every time they are about to expire until someone retires, or is there a limit?


You can't lose your citizenship if you decide to live abroad or get convicted of certain crimes, whereas you can lose your green card for either; you can vote in federal elections as a citizen; and you can sponsor your parents and siblings for green cards. It's really a personal decision, I think. And there's no limit on how many times a green card can be renewed.


A green card holder is a permanent resident. I think they will keep renewing it unless, for example, you didn't file taxes, committed a serious crime, or something like that.

https://www.stilt.com/blog/2020/05/green-card-renewal-denied...

Also, a green card holder cannot live outside the US for too long, or it may be revoked. You can pre-apply for a reentry permit to get a longer window.

https://www.stilt.com/blog/2020/07/can-i-stay-more-than-6-mo...

Citizenship removes all those restrictions. Also you can petition for your parents to immigrate immediately.


Amazing idea and execution. If you don’t mind revealing, can you describe the ML components in the background? Model, data trained on, etc.? Thanks and good luck!


Please keep in mind that the tips/tools/techniques that people list are highly subjective and may not be applicable everywhere. I realized this from my own experience (worked at two startups in the past, and now at BigCo):

* Glassdoor: Sort reviews by date (people are relatively happy when they join; negative reviews tend to be more recent). Take every review (good or bad) with a grain of salt. Reviews with higher "helpful" counts tend to be more reliable, since at plenty of companies HR itself writes glowing reviews.

* Crunchbase: To know more about the funding situation, list of execs, etc.

* levels.fyi: Salary info. Also check h1bsalaryinfo (website). It may or may not be applicable to you, but it will give you an idea of what your peers potentially make.

* LinkedIn: Search the name of the company, click on the "People" tab, look at the list of people who match your background, and then dig into their backgrounds: how long they have been at the company, what their previous jobs were, and how well you think their backgrounds support their roles (even before you talk to them). At my previous company, there were a bunch of people who did a sub-par job as software engineers at their previous companies, took director/lead/VP roles purely based on their years of experience rather than technical expertise, and screwed up the mission big time. This tends to happen at medium/big companies more frequently than at small startups.

* On-site interviews: I know it is going to be extremely hard to gauge a team/company while they are gauging you, but it can be done. Prepare your questions beforehand, and pay attention to how concrete or vague their responses are.

* Blind (app) or teamblind (on web): Search for the company and look for comments/questions etc. There may be some discussions which may not be applicable, but it doesn't hurt to search.

* GitHub/Medium: Some companies/startups have their own pages, or at least their team members do (although not always current). It'd be good to check those as well.

* Cold emailing: Previous and current employees. Not many will respond, but if even one or two do, I'd highly value their input

* Coffee: Doesn't hurt to ask (most people are willing to do this; those who decline usually do so only because they don't have time or feel awkward about meeting new people)

Most of these will be applicable to smaller companies/startups, but at big companies, it'll be tough knowing all the people you will work with. Ultimately, it's going to boil down to a few things:

(a) your instincts: whether you feel you'll fit right in or not, whether the company is good for you or not, whether you believe in the core mission or not. I tend to believe this more.

(b) your team: a good team/team member can make your life at work the best experience or the worst experience. And unfortunately, there is no way to predict this unless you give it a chance.

Good luck!


I agree. The more proactive you are about reaching out or asking the right questions, the more you'll know. Thanks for the detailed suggestions. I personally think Glassdoor is pretty outdated. Blind is more 'real time'


So true! I've interviewed with several startups (about 50-60) in the past, over the course of 5 years. I had offers from most of them, while some of them rejected me after the interviews (for whatever reasons they had). Your comment is so close to reality (based on my interactions). Some laugh, some genuinely have no clue, some act arrogant (you can either join based on whatever limited information is provided, or leave), and some say they don't share any such information.

I've worked at two startups in the past. The first startup tanked. When interviewing for the next role, I did ask these questions, but had no luck. I ended up taking an offer with a 15% increase over my last role. Two years in, I found out that this startup, too, was on its way down. Eventually I ended up moving to a big company.


I don't think it would be a stretch to think the ones that can answer most of these would be the outliers that make it past two years.


You must have been doing multiple interviews per month, every month, for those five years in order to get to the stage of getting an offer from a majority of the 60 companies. That sounds exhausting!


I did. Once I joined the companies that did not openly answer my questions during interviews (about finances, strike price, etc.), I ended up realizing all the negatives/problems of the companies from the inside.

From then on, I had only two choices: (1) ignore the problems and not worry about the future, or (2) act fast and start interviewing so that I'd have options if something went wrong. I chose the second option, and hence a lot of interviewing.


Some good points by others on this post. I want to focus on something more latent. Are you sure there are no other distractions in your life?

Try to find out if there is something deeper than surface level that is bothering you. Such distractions can be of any type: lack of interest in what you do (or rather, the constant desire of doing something else), homesickness, heartbreak, financial hardship, issues with family or friends, etc. If there are any such reasons, you should try addressing them first. Either solve them, or get some help in coping with factors you cannot solve.


I didn't know it in my 20s, but I was completely distracted by chasing girls to find one that I thought was perfect. Once I found my wife in my 30s, my ability to focus and learn went through the roof. Wish I had this focus when I was in school.

My suggestion is to only have 1 or 2 goals and ruthlessly cut everything else out. I've learnt to say no to other distractions. In fact, I only keep a 2nd goal around because there is waiting involved in the 1st goal.


I personally know two people who quit high-paying industry jobs and moved to reputable research universities (within the US) as Assistant Professors in their late 30s. This was about 8 years ago. Fast forward to the present: one is a full professor, while the other is still an associate professor. The difference is that one used his expertise and experience to write and win more grants, published more papers, and is well known for his research (both inside and outside the university). The other is known for his teaching (within the university), took it slow, and didn't publish as much.

I also know another guy who got his PhD, moved to academia (research lab and all), quit and moved to the gaming industry to code, worked there for 5 years, and has now moved back to academia (once again, research lab and all). Then there is another person who got his PhD, worked as a post-doc, then as an Assistant Professor, quit because he didn't enjoy it, and is now working next to me, enjoying an industry position.

I think all of them are truly enjoying what they do. I guess the question for you is what would you like the most?

There are several questions that you'd have to answer for yourself:

(1) Are you in it for the teaching? Or are you in it for the research, i.e., having freedom in what you work on? Keep in mind that if you join as an Assistant Prof. in a tenure-track role, you'd still have to prove yourself in the long run. This could mean working on some projects that you may or may not enjoy in the short term.

(2) If you are in it for the teaching, do you care about where you teach? Community colleges and small universities are always looking for people to teach (as a full-time professor or a part-time lecturer). Does the difference between these matter to you as much as the love of teaching itself?

(3) You could probably try guest lecturing to check whether you truly enjoy teaching. Or maybe teach just for a semester, if that is at all appealing.

I personally have a PhD, wanted to be in academia for a long time but jumped to industry for numerous personal reasons. This is my 5th year in industry and I love what I work on. However, I still feel that my heart is in academia. To get a reality check, I'll be guest-/co-lecturing several sessions of a course at a public university this fall. I'm curious how things will turn out. Good luck to you too!


I worked with Chernoff faces a long time back, and I love how they offer an interesting way to visualize how discriminative your features are.

The idea is that you take features of your dataset, and use those to represent a face. Say for example, you want to classify 100 people based on different features. And let's say you've collected 15 features for each person (e.g., height, weight, shoulder width, length of first name, length of last name, type of car driven, etc.). Now try mapping each of these features to Chernoff faces. You'd map it in the following manner: height->area of face, weight->shape of face, shoulder width->length of nose, length of first name->location of mouth, length of last name->curve of smile, type of car driven->width of mouth, etc.

Once you've mapped the features in that fashion and visualized the faces, you can observe how discriminative your features are. How do you interpret this? If your Chernoff faces show a lot of variation in expression (e.g., smiling vs. sad), you can say that the length of the last name is quite discriminative. On the other hand, if the faces all appear to have the same area, your first feature (i.e., height) is not very discriminative.

Other features used for Chernoff faces could be: location, separation, angle, shape, and width of eyes; location, and width of pupil; location, angle, and width of eyebrow, etc.
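
If you want to play with the idea, here is a rough sketch of that kind of mapping in Python with matplotlib. The feature names, scalings, and which feature drives which facial attribute are all made up for illustration (real Chernoff-face implementations expose many more facial parameters):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse, Circle

    def chernoff_face(ax, f):
        # f: dict of features, each normalized to [0, 1]
        # height -> width of the face, weight -> height of the face
        ax.add_patch(Ellipse((0, 0), width=1 + f['height'],
                             height=1 + f['weight'], fill=False, lw=2))
        # shoulder_width -> separation of the eyes
        sep = 0.2 + 0.3 * f['shoulder_width']
        for x in (-sep, sep):
            ax.add_patch(Circle((x, 0.25), 0.06, color='k'))
        # name_length -> curve of the smile (0 = frown, 1 = smile)
        curve = (f['name_length'] - 0.5) * 0.6
        xs = np.linspace(-0.3, 0.3, 50)
        ax.plot(xs, -0.35 - curve * (1 - (xs / 0.3) ** 2), 'k')
        ax.set_xlim(-1.2, 1.2)
        ax.set_ylim(-1.2, 1.2)
        ax.set_aspect('equal')
        ax.axis('off')

    # Two made-up records. If the two faces look nearly identical, the chosen
    # features don't discriminate between these people very well.
    people = [
        {'height': 0.9, 'weight': 0.4, 'shoulder_width': 0.8, 'name_length': 0.9},
        {'height': 0.2, 'weight': 0.7, 'shoulder_width': 0.3, 'name_length': 0.1},
    ]
    fig, axes = plt.subplots(1, len(people))
    for ax, person in zip(axes, people):
        chernoff_face(ax, person)
    plt.show()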

One drawback (as listed on the Wikipedia page) is that we humans perceive the importance of each feature through the facial attribute it is mapped to. If the mapping is not carefully chosen, your most-varying feature may be overlooked simply because we notice a change in expression more readily than a change in eyebrow length.


> this is an interesting way to visualize how discriminative your features are.

I don't get why that is easier or more revealing than doing a principal component analysis (?)


Because recognising faces is an innate human ability while calculating eigenvectors is not.


That's why we get computers to calculate eigenvectors and plot the result in a nice graph that humans are fine at interpreting.


I agree. I don't think this is any more revealing. It's just a different way of visualizing. And there is also the drawback that you get a whole new interpretation when you re-map your input features to different Chernoff facial features.


That's what I'm struggling with. Take the example on the Wikipedia page: I don't know what the faces mean for each judge. Is the judge a jerk? Does the judge take too long with their arraignments? Is this a value, er, judgment on the legal sufficiency of their determinations?

If the primary utility is to be able to quickly visually discriminate values once you know how they're encoded into facial features, then I can see the value, but again you'd have to know the encoding. Or have I missed the point completely?


Yes, the primary utility is to understand how discriminative your features are. The faces themselves don't represent anything in particular.

Check out the Chernoff Fish demo posted below by the user meagher: https://news.ycombinator.com/item?id=16664051. Play with different features, for example 'performance'. When you change the value of 'performance', the eye size changes. However, the eye size doesn't mean anything in itself; it only helps you visually understand variation in the data. If 'performance' were mapped to, say, fin size instead, the meaning wouldn't change.


I wonder how this affects pattern recognition.

Would people more readily recognize that say, "Large Spiky Orange Fish" strategies lead to greater returns, compared to if the strategies were presented as "Short | Value Investment | Large Market Capitalization"?

This could also be an interesting way of eliminating inherent bias while leveraging human pattern recognition abilities. Represent values pictorially, and hide data labels.


Thanks, I think I understand its utility a bit better now.


It's not meant for professional data analysis use. It's a gimmick for kids or for printing in a magazine article.


Because humans are naturally good at recognizing faces and the differences between them.



I lean towards what Velodyne is saying in this situation. I have been working with LiDAR systems for over 4 years, of which the last 1.5 have been spent building autonomous driving vehicles. When I saw the videos, I was truly baffled by how a LiDAR could miss that. I have worked with different types of LiDARs (from different manufacturers), and there is a very high chance that the LiDAR point cloud contained all the information about the person and the bicycle needed to make a decision.

What we need to keep in mind is that sensing an object is different from deciding whether or not to take an action (e.g., hitting brakes, raising alarms, swerving, etc.).

Most LiDAR/RADAR/camera manufacturers only provide input data. It's like saying "hey, I see this". It's up to the perception software to decide whether or not to act on it.

In most cars, relatively simpler decisions are made by the car's perception software (e.g., adaptive cruise control, lane change warning, automatic braking, etc.).

Self-driving companies override such systems, and rewire the car such that it is their perception software that makes the decision. So the onus is completely on the self-driving company's software. In this case, it is the perception software developed by Uber to be critiqued - not Velodyne, not Volvo, not the camera manufacturer.

It looks like the engineers at Velodyne feel confident that they should (and would have) sensed the person, and hence their statement. I wouldn't doubt them much as they have been in the LiDAR game since DARPA days when self driving was considered experimental.

From a different angle, Velodyne may not have much to lose by throwing Uber under the bus - especially when compared to how much their reputation is at stake. This is because Velodyne has several big customers (e.g., Waymo, and almost every other self-driving or mapping company that is serious about getting big).

NTSB should and will get access to the point clouds. Uber has a choice of releasing the point clouds to the public - but I highly doubt they will.


If you've worked with LIDARs, maybe you know how much noise they give in the output? Couldn't it be that Uber's software filtered the pedestrian out as noise, for example because there was no matching object on the camera, or because reflections from the bike looked like random noise?


Both effects you mention (sensor fusion problem between camera/lidar; spotty lidar reflections from bike) are possible.

These problems probably should not have prevented detecting this obstacle, though. But, a lot depends on factors like the range of the pedestrian/bike, the particular Velodyne unit used, and the mode it was used in.

One key thing is that lidar reflections off the bike would have been spotty, but lidar off the pedestrian's body should have been pretty good. That's a perhaps 50-cm wide solid object, which is pretty large by these standards. But the number of lidar "footprints" on the target depends on range.

You'd have to estimate the range of the target (15m?) and compute the angle subtended by the target (0.5m/15m ~= 0.03 radian ~= 2 degrees), and then compare this to the angular resolution of the Velodyne unit to get a number of footprints-on-target.

Perhaps a half dozen, across a couple of left-to-right scan lines. Again, depending on the scan pattern of the particular Velodyne unit in use. The unit should make more than one pass in the time it took to intersect the pedestrian.

This should be enough to detect something, if the world-modeling and decision-making software was operating correctly, hence the puzzlement.
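
That back-of-the-envelope math is easy to parameterize. The angular resolutions below are coarse placeholders, not the spec of any particular Velodyne unit (the actual model, spin rate, and mode matter a lot), so treat the output as illustrative only:

    import math

    def lidar_footprints(width_m, height_m, range_m, h_res_deg, v_res_deg):
        # Rough count of lidar returns on a target of the given size and range,
        # for a scanner with the given horizontal/vertical angular resolution.
        width_deg = math.degrees(2 * math.atan(width_m / (2 * range_m)))
        height_deg = math.degrees(2 * math.atan(height_m / (2 * range_m)))
        per_line = width_deg / h_res_deg   # returns per left-to-right scan line
        lines = height_deg / v_res_deg     # scan lines crossing the target
        return per_line, lines

    # 0.5 m wide, 1.7 m tall pedestrian at 15 m, with coarse placeholder resolutions.
    per_line, lines = lidar_footprints(0.5, 1.7, 15.0, h_res_deg=0.4, v_res_deg=2.0)
    print(f"~{per_line:.0f} returns per line across ~{lines:.0f} lines")

A finer horizontal resolution (many spinning units are in the 0.1-0.4 degree range, depending on spin rate) multiplies the per-line count accordingly, which is why the point cloud would be expected to contain something usable at that range.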


They do have noise, but we are talking about millimeter-to-centimeter scale (accuracy is < 2 cm). So a grown-up person is roughly two orders of magnitude bigger than the accuracy of the scanner.

To give an example of how big (or small) this noise would have been in this situation, I did a very simple virtual scan of a person with a bicycle at a distance of 15 meters [1]

It was scanned with a virtual scanner inside our sensor simulation software, so this is not the real data and should be taken with a grain of salt.

[1] http://www.blensor.org/blog_entry_20180323.html


It is not possible for the algorithm looking at the LIDAR input data to have the same level of discrimination as humans, so this would be a possibility in my opinion.


I don't understand why everyone is focusing so heavily on whether the LIDAR was at fault or not. The car does have a RADAR as well, which would not help much in detecting the pedestrian, but most certainly the bike she was pushing along. I don't know the field of view of the radar, but that should have caused an emergency brake as well, shouldn't it?


> an emergency break

I think half the people commenting on this incident have misspelled "brake". It's odd, because that's not something I've observed as a common error before.


Happens in every single thread involving cars and braking. Every one!


thanks for pointing that out. I fixed it, and have absolutely no idea how I made the mistake in the first place.


From a recent publication in an IEEE conference related to intelligent vehicles:

"Radar is robust against bad weather, rain and fog; it can measure speed and distance of an object, but it does not provide enough data points to detect obstacle boundaries, and experimental results show that radar is not reliable to detect small obstacles like pedestrians."

This would be because the wavelength of lidar is in the micron range while that of vehicle-detection radar is in the mm-cm range. You won't be able to reliably get radar reflections off of mm/cm-scale objects or object elements, or accurately (<1cm) localize object boundaries. Good navigation would require tighter localization.

Radars are really good, though, for detection of objects, including identifying moving objects, close up -- canonical examples being walls and other vehicles. Radar sensors are rather cheap (having been in mass production for a long time) so it's common to have one on every bumper or corner.
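
The wavelength comparison is easy to make concrete. Assuming a 77 GHz automotive radar and a 905 nm lidar (common values, not necessarily what this particular car carried):

    C = 299_792_458.0            # speed of light, m/s

    radar_freq_hz = 77e9         # assumed automotive radar band
    lidar_wavelength_m = 905e-9  # assumed near-infrared lidar

    radar_wavelength_m = C / radar_freq_hz
    print(f"radar wavelength: {radar_wavelength_m * 1000:.1f} mm")      # ~3.9 mm
    print(f"lidar wavelength: {lidar_wavelength_m * 1e6:.2f} microns")  # ~0.9 microns
    print(f"ratio: ~{radar_wavelength_m / lidar_wavelength_m:,.0f}x")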


If it were "only" a human then the radar would have a hard time seeing the person. But the bicycle is a substantial chunk of metal which should give much stronger echo.

And a 10 kg. metal object placed in the path of an autonomous car should cause all kinds of emergency measures to engage.

Many high-end cars have auto-braking systems based on radar (and additional sensors). Mercedes even has a similar scenario on their product page [1]

Volvo does so too, as far as I could find.

So not only did they fail to detect the obstacle with two different sensor technologies, they may also have deactivated the already existing safety features in that car.

[1] https://www.mercedes-benz.com/en/mercedes-benz/innovation/on...


To be pithy - humans are big absorptive sacks of water, radar sees us poorly.


Would a bunch of plastic bags filled with stuff (plastic bottles, clothes, etc..) tied to a bike be something the LiDAR would see as part of the roadway rather than a set of distinct objects?


I currently work full-time in the self-driving vehicle industry. I am part of a team that builds perception algorithms for autonomous navigation. I have been working exclusively with LiDAR systems for over 1.5 years.

Like a lot of folks here, my first question was: "How did the LiDAR not spot this?". I have been extremely interested in this and kept observing images and videos from Uber to understand what could be the issue.

To reliably sense a moving object is a challenging task. To understand/perceive that object (i.e., shape, size, classification, position estimate, etc.) is even more challenging. Take a look at this video (set the playback speed to 0.25): https://youtu.be/WCkkhlxYNwE?t=191

Observe the pedestrian on the sidewalk to the left. And keep a close eye on the laptop screen (held by the passenger on right) at the bottom right. Observe these two locations by moving back and forth +/- 3 seconds. You'll notice that the height of the pedestrian varies quite a bit.

This variation in pedestrian height and bounding box happens at different locations within the same video. For example, at 3:45 mark, the height of human on right wearing brown hoodie, keeps varying. At 2:04 mark, the bounding box estimate for pedestrian on right side appears to be unreliable. At 1:39 mark, the estimate for the blue (Chrysler?) car turning right jumps quite a bit.

This makes me believe that their perception software isn't robust enough to handle the exact scenario in which the accident occurred in Tempe, AZ.

I think we'll know more technical details in the upcoming days/weeks. These are merely my observations.


Alright, so given your observations, which I don't doubt, here's a question I have: why have a pilot on public roads?

If uber's software wasn't robust, why "test in production" when production could kill people?


> If uber's software wasn't robust, why "test in production" when production could kill people?

Because it's cheap. And Arizona lawmakers apparently don't do their job of protecting their citizens against a reckless company that is doing the classic "privatize profits, socialize losses" move, with "profits" being the improvements to their so-called self-driving car technology and "losses" being random people endangered and killed during the process of alpha-testing and debugging their technology in this nice testbed we call "city", which conveniently comes complete with irrationally acting humans that you don't even have to pay anything for serving as actors in your life-threatening test scenarios.


Disclaimer: I am playing Devils Advocate and I don't necessarily subscribe to the following argument, but:

Surely it's a question of balancing against the long term benefit from widely adopted autonomous driving?

If self-driving cars in their current state are at least close to as safe as human drivers, then you could argue that a small short-term increase in the casualty rate, in exchange for faster development, is a reasonable cost. The earlier that proper autonomous driving is widely adopted, the better for overall safety.

More realistically, if we think that current autonomous driving prototypes are approximately as safe as the average human, then it's definitely worthwhile - same casualty rate as current drivers (i.e. no cost), with the promise of a much reduced rate in the future.

Surely "zero accidents" isn't the threshold here (although it should be the goal)? Surely "improvement on current level of safety" is the threshold?


You can make the argument with the long-term benefits. But you cannot make it without proper statistically sound evidence about the CURRENT safety of the system that you intend to test, for the simple reason that the other traffic participants you potentially endanger are not asked if they accept any additional risk that you intend to expose them to. So you really need to be very close to the risk that they're exposed to right now anyway, which is approximately one fatal accident every 80 million miles driven by humans, under ANY AND ALL environmental conditions that people are driving under. That number is statistically sound, and you need to put another number on the other side of the equation that is equally sound and on a similar level. This is currently impossible to do, for the simple fact that no self-driving car manufacturer is even close to having multiple hundreds of millions of miles traveled in self-driving mode in conditions that are close enough to real roads in real cities with real people. Purely digital simulations don't count. What can potentially count in my eyes is real miles with real cars in "stage" environments, such as a copy of a small city, with other traffic participants that deliberately subject the car to difficult situations, erratic actions, et cetera, of which all of them must be okay with their exposure to potentially high-risk situations.

Of course that is absurdly expensive. But it's not impossible, and it's the only acceptable way of developing this high-potential but also highly dangerous technology up to a safety level at which you can legitimately make the argument that you are NOT exposing the public to any kind of unacceptable additional risk when you take the super-convenient and cheap route of using the public infrastructure for your testing. If you can't deal with these costs, just get the fuck out of this market. I'm also incapable of entering the pharmaceuticals development market, because even if I knew how to mix a promising new drug, I would not have the financial resources to pay for the extensive animal and clinical testing procedures necessary to get this drug safe enough for selling it to real humans. Or can I also just make the argument of "hey, it's for the good of humanity, it'll save lives in the long run and I gave it to my guinea pig which didn't die immediately, so statistically it's totally safe!" when I am caught mixing the drug into the dishes of random guests of a restaurant?
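
One way to put a rough number on "statistically sound" is the rule of three: observing zero events in n independent trials gives a ~95% upper confidence bound on the event rate of about 3/n. A minimal sketch, assuming the roughly 1-fatality-per-100-million-miles human benchmark quoted in this thread (the 80M figure above gives the same order of magnitude):

    HUMAN_FATALITY_RATE = 1 / 100_000_000   # per mile, approximate benchmark

    def miles_needed_for_parity(rate=HUMAN_FATALITY_RATE):
        # Rule of three: zero fatalities over n miles bounds the rate below ~3/n
        # at ~95% confidence, so claiming parity needs n >= 3 / rate clean miles.
        return 3 / rate

    def rate_upper_bound(fatality_free_miles):
        return 3 / fatality_free_miles

    print(f"fatality-free miles needed: {miles_needed_for_parity():,.0f}")
    print(f"best supportable claim after 10M clean miles: 1 per "
          f"{1 / rate_upper_bound(10_000_000):,.0f} miles")

Fleet totals in the single-digit millions of miles are nowhere near that, which is the point about needing hundreds of millions of representative miles before the comparison is even meaningful.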


It's an n of 1, but we're nowhere close to 'human driver' levels of safe.

Humans get 1 death per 100 million miles.

Waymo/Uber/Cruise have <10 million miles between them. So currently they're 10 times more deadly. While you obviously can't extrapolate like that, it's still damning.

If you consider just Uber, they have somewhere between 2 and 3 million miles, suggesting a 40x more deadly rate. I think it's fair to consider them separately as my intuition is that the other systems are much better, but this may be terribly misguided.
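
For what it's worth, here is the naive arithmetic behind those multipliers spelled out (with the same caveat that you cannot really extrapolate from a single event):

    HUMAN_MILES_PER_DEATH = 100_000_000

    def naive_multiplier(fleet_miles, deaths=1):
        # One observed death over `fleet_miles` vs. the human benchmark.
        return HUMAN_MILES_PER_DEATH / (fleet_miles / deaths)

    print(f"all AV fleets, ~10M miles: ~{naive_multiplier(10_000_000):.0f}x the human rate")
    print(f"Uber alone, ~2.5M miles:  ~{naive_multiplier(2_500_000):.0f}x the human rate")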

This is a huge deal.

I honestly never thought we'd see such an abject failure of such systems on such an easy task. I knew there would be edge cases and growing pains, but 'pedestrian crossing the empty road ahead' should be the very first thing these systems are capable of identifying. The bare minimum.

This crash is going to result in regulation, and that's going to slow development, but it's still going to be justified.


I have the same questions as well. But my best guess is that they probably have permission to drive at non-highway speeds at late nights/early mornings (which is when this accident occurred, at 10 PM).

My first reaction when I watched that video was that my Subaru with EyeSight+RADAR would have stopped/swerved. Even the news articles state something similar (from this article: https://www.forbes.com/sites/samabuelsamid/2018/03/21/uber-c...)

>The Volvo was travelling at 38 mph, a speed from which it should have been easily able to stop in no more than 60-70 feet. At least it should have been able to steer around Herzberg to the left without hitting her.

As far as why test this, I'm guessing peer pressure(?). Waymo is way ahead in this race and Uber probably doesn't wanna feel left out, maybe?

Once again, all of these are speculations. Let's see what NTSB says in the near future.


I live here and they drive around at all times of the day and don't seem to have any limitations. They've been extremely prevalent and increasing in frequency over the past year. In fact, it's unusual _not_ to see them on my morning commute.


> At least it should have been able to steer around Herzberg to the left without hitting her.

Does the car have immediate 360 degrees perception? A human would have to look in one or two rear view mirrors before steering around a bike, or possibly put himself and others in an even worse situation.


Sorry but that's just wrong behaviour IMO.

If you're about to hit a pedestrian and your only option is to swerve, then you swerve. What could you possibly see in the rear view mirror that would change your reaction from "I'm gonna try to swerve around that pedestrian" to "I'm gonna run that pedestrian over"? Another car? Then you're going to take your chance and turn in front of that car! The chances that people will survive the resulting crash are way higher than the survival rate of a pedestrian hit at highway speeds.


You should always be aware when driving of where your "exits" are. This is not hard to do. Especially at 38 MPH, you can be extremely confident there are no bikes to your left if you have not passed any in the past couple seconds. And, lanes are generally large enough in the US that you can swerve partway into one even if there are cars there.


If everybody is driving at the same speed in all lanes, which is not unlikely on that kind of road, I am generally not confident that I can swerve into another lane _and slam the brakes_ without being hit. If I am hit, the resulting impact speed with the bike could be even worse than if I had just slammed the brakes, so I don't think it's really a given.

You also cannot decide in 1 second what would happen if the pedestrian were to freeze, and whether you'd end up hitting him/her even worse by swerving left.

Most people in that situation would just brake, I think.


Because Uber wanted that.

Other self-driving car companies (like Google (or whatever they renamed it)) have put a lot more work into their systems and done a much greater degree of due diligence in proving their systems are safe enough to drive on public roads. Uber has not, which is why they've been kicked out of several cities where they were trying to run tests. But Tempe and Arizona are practically a lawless wasteland in this regard and are willing to let Uber run amok on their roads in the hopes that it'll help out the city financially somehow.


I'm assuming LiDAR is not the only sensor installed in self-driving cars. Isn't that the case? And in this scenario, the software didn't have a lot to process. The road was empty, and the pedestrian was walking, bike in hand, perpendicular to traffic...

Even if the detection box changed in size, it should have detected something. Tall or short, wide or narrow, static or moving... at the very least it should have applied the brakes to avoid a collision.


I'm really surprised that we're even talking about the pedestrian's clothes or lighting or even the driver. Isn't the entire point of sensors like LiDAR to detect things human beings can't? The engineering is clearly off.


LIDAR works by shining a laser beam of a certain wavelength. If some object completely absorbs that wavelength, there's no way the LIDAR can see it.


Is it possible for the car to do some calibration of some sort to decide what the current "sensor visibility" is, like a human would do in fog? Is it common practice to use this information to reduce or alter the speed of the car?


Great question. At least in our algorithms we do this - to adjust the driving speed based on the conditions (e.g., visibility or perception capabilities).

At the end of the day, you can drive only as fast as your perception capabilities allow. A good example of that is the difference in how fast humans can perceive when influenced by drugs/alcohol/medications vs. when uninfluenced.

What is baffling is the fact that the car was driving at 38 mph in a 35 mph zone. This should not happen regardless of how good or poor your sensing/perception capabilities are.
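
To make "you can drive only as fast as your perception capabilities" concrete, one common way to frame it is a stopping-distance budget: cap the speed so that reaction distance plus braking distance fits inside the range at which obstacles can be reliably detected. This is a generic sketch, not anyone's production logic, and the deceleration and latency numbers are illustrative assumptions:

    import math

    def max_safe_speed_mps(detection_range_m, decel_mps2=6.0, latency_s=0.5):
        # Largest v such that v*latency + v^2 / (2*decel) <= detection range.
        # decel and latency are assumed values, not measured ones.
        a = 1.0 / (2.0 * decel_mps2)
        b = latency_s
        c = -detection_range_m
        return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

    for rng_m in (20, 40, 80):   # e.g. degraded vs. clear "sensor visibility"
        v = max_safe_speed_mps(rng_m)
        print(f"reliable detection out to {rng_m:3d} m -> cap speed at {v * 2.23694:.0f} mph")

The exact numbers don't matter; the point is that degraded sensing should translate directly into a lower speed cap.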


Maybe the question isn't why the LIDAR didn't spot it. I feel it's more likely it did spot it, but couldn't make the correct decision.


You summed up all my speculations in one sentence

